2025-04-02 02:26:30,217 [ 285124 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:53, check_args_and_update_paths)
2025-04-02 02:26:30,217 [ 285124 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:97, check_args_and_update_paths)
2025-04-02 02:26:30,217 [ 285124 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:108, check_args_and_update_paths)
2025-04-02 02:26:30,217 [ 285124 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:110, check_args_and_update_paths)
clickhouse_integration_tests_volume
Running pytest container as: 'docker run --rm --name clickhouse_integration_tests_q0y5cb --privileged --dns-search='.' --memory=30709030912 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=8b2301119731 -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=caad4729259e -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 test_drop_replica_with_auxiliary_zookeepers/test.py::test_drop_replica_in_auxiliary_zookeeper test_encrypted_disk/test.py::test_add_keys test_encrypted_disk/test.py::test_add_keys_with_id 'test_encrypted_disk/test.py::test_backup_restore[File-encrypted_policy-local_policy-False]' 'test_encrypted_disk/test.py::test_backup_restore[File-encrypted_policy-local_policy-True]' 'test_encrypted_disk/test.py::test_backup_restore[File-local_policy-encrypted_policy-False]' 'test_encrypted_disk/test.py::test_backup_restore[File-s3_encrypted_default_path-encrypted_policy-False]' 'test_encrypted_disk/test.py::test_backup_restore[S3-encrypted_policy-encrypted_policy-False]' 'test_encrypted_disk/test.py::test_backup_restore[S3-encrypted_policy-s3_encrypted_default_path-False]' 'test_encrypted_disk/test.py::test_backup_restore[S3-s3_encrypted_default_path-encrypted_policy-False]' 'test_encrypted_disk/test.py::test_backup_restore[S3-s3_encrypted_default_path-s3_encrypted_default_path-False]' 'test_encrypted_disk/test.py::test_encrypted_disk[encrypted_policy]' 'test_encrypted_disk/test.py::test_encrypted_disk[encrypted_policy_key192b]'
'test_encrypted_disk/test.py::test_encrypted_disk[local_policy]' 'test_encrypted_disk/test.py::test_encrypted_disk[s3_policy]' test_encrypted_disk/test.py::test_log_family 'test_encrypted_disk/test.py::test_migration_from_old_version[version_1be]' 'test_encrypted_disk/test.py::test_migration_from_old_version[version_1le]' 'test_encrypted_disk/test.py::test_migration_from_old_version[version_2]' 'test_encrypted_disk/test.py::test_optimize_table[local_policy-disk_local_encrypted]' 'test_encrypted_disk/test.py::test_optimize_table[s3_policy-disk_s3_encrypted]' 'test_encrypted_disk/test.py::test_part_move[local_policy-destination_disks0]' 'test_encrypted_disk/test.py::test_part_move[s3_policy-destination_disks1]' test_encrypted_disk/test.py::test_read_in_order test_encrypted_disk/test.py::test_restart test_explain_estimates/test.py::test_explain_estimates test_external_http_authenticator/test.py::test_basic_auth_failed test_external_http_authenticator/test.py::test_session_settings_from_auth_response test_external_http_authenticator/test.py::test_user_create_basic_auth_pass test_external_http_authenticator/test.py::test_user_from_config_basic_auth_pass 'test_fetch_partition_from_auxiliary_zookeeper/test.py::test_fetch_part_from_allowed_zookeeper[PART-2020-08-28-20200828_0_0_0]' 'test_fetch_partition_from_auxiliary_zookeeper/test.py::test_fetch_part_from_allowed_zookeeper[PARTITION-2020-08-27-2020-08-27]' test_file_cluster/test.py::test_count test_file_cluster/test.py::test_format_detection test_file_cluster/test.py::test_missing_file test_file_cluster/test.py::test_no_such_files test_file_cluster/test.py::test_non_existent_cluster test_file_cluster/test.py::test_schema_inference test_file_cluster/test.py::test_select_all test_format_schema_on_server/test.py::test_drop_cache_protobuf_format test_format_schema_on_server/test.py::test_drop_capn_proto_format test_format_schema_on_server/test.py::test_protobuf_format_input test_format_schema_on_server/test.py::test_protobuf_format_output test_graphite_merge_tree/test.py::test_broken_partial_rollup test_graphite_merge_tree/test.py::test_combined_rules test_graphite_merge_tree/test.py::test_combined_rules_with_default test_graphite_merge_tree/test.py::test_multiple_output_blocks test_graphite_merge_tree/test.py::test_multiple_paths_and_versions test_graphite_merge_tree/test.py::test_path_dangling_pointer test_graphite_merge_tree/test.py::test_paths_not_matching_any_pattern test_graphite_merge_tree/test.py::test_rollup_aggregation test_graphite_merge_tree/test.py::test_rollup_aggregation_2 test_graphite_merge_tree/test.py::test_rollup_versions test_graphite_merge_tree/test.py::test_system_graphite_retentions test_graphite_merge_tree/test.py::test_wrong_rollup_config test_grpc_protocol_ssl/test.py::test_insecure_channel test_grpc_protocol_ssl/test.py::test_secure_channel test_grpc_protocol_ssl/test.py::test_wrong_client_certificate test_hedged_requests_parallel/test.py::test_combination1 test_hedged_requests_parallel/test.py::test_combination2 test_hedged_requests_parallel/test.py::test_query_with_no_data_to_sample test_hedged_requests_parallel/test.py::test_send_data test_hedged_requests_parallel/test.py::test_send_table_status_sleep test_http_and_readonly/test.py::test_http_get_is_readonly test_http_native/test.py::test_http_native_returns_timezone test_input_format_parallel_parsing_memory_tracking/test.py::test_memory_tracking_total 'test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_big[0]' 
'test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_big[1]' 'test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_small[0]' 'test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_small[1]' 'test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_different_header[0]' 'test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_different_header[1]' 'test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_success[0]' 'test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_success[1]' 'test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[0-0]' 'test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[0-1]' 'test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[1-0]' 'test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[1-1]' 'test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_2[0]' 'test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_2[1]' test_jbod_ha/test.py::test_jbod_ha test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used_detect_background_changes test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used_next_disk test_jbod_load_balancing/test.py::test_jbod_load_balancing_round_robin test_keeper_availability_zone/test.py::test_get_availability_zone test_keeper_broken_logs/test.py::test_single_node_broken_log test_keeper_client/test.py::test_base_commands test_keeper_client/test.py::test_big_family test_keeper_client/test.py::test_delete_stale_backups test_keeper_client/test.py::test_find_super_nodes test_keeper_client/test.py::test_four_letter_word_commands test_keeper_client/test.py::test_get_all_children_number test_keeper_client/test.py::test_quoted_argument_parsing test_keeper_client/test.py::test_rm_with_version test_keeper_client/test.py::test_rm_without_version test_keeper_client/test.py::test_set_with_version test_keeper_client/test.py::test_set_without_version test_keeper_incorrect_config/test.py::test_invalid_configs test_keeper_memory_soft_limit/test.py::test_soft_limit_create -vvv -ss" altinityinfra/integration-tests-runner:2165613c5fcd '.
Start tests
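Note: the runner wraps the whole pytest session in a single privileged container. Host paths for the freshly built binaries, base configs, and the test tree are bind-mounted, per-image tags travel as `-e` variables, and the test selection rides inside `PYTEST_ADDOPTS`. Below is a minimal sketch of how such a command could be assembled; the helper name and argument layout are illustrative, not the actual runner code.

```python
import subprocess

def run_pytest_container(image: str, name: str, mounts: dict[str, str],
                         env: dict[str, str], pytest_addopts: str) -> int:
    """Illustrative reconstruction of the 'docker run' invocation above."""
    cmd = ["docker", "run", "--rm", "--name", name, "--privileged",
           "--dns-search=.", "--security-opt", "seccomp=unconfined",
           "--cap-add=SYS_PTRACE"]
    # Bind-mount the clickhouse binaries, configs, and the test tree.
    cmd += [f"--volume={host}:{guest}" for host, guest in mounts.items()]
    # Image tags, timeouts, etc. travel as environment variables.
    for key, value in env.items():
        cmd += ["-e", f"{key}={value}"]
    # The test selection itself is passed through to pytest.
    cmd += ["-e", f"PYTEST_ADDOPTS={pytest_addopts}", image]
    return subprocess.run(cmd).returncode
```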
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3
cachedir: .pytest_cache
Test order randomisation NOT enabled. Enable with --random-order or --random-order-bucket=
rootdir: /ClickHouse/tests/integration
configfile: pytest.ini
plugins: timeout-2.3.1, repeat-0.9.3, order-1.0.0, reportlog-0.4.0, xdist-3.5.0, random-order-1.1.1
timeout: 900.0s
timeout method: signal
timeout func_only: False
created: 10/10 workers
10 workers [100 items]

scheduling tests via LoadFileScheduling

test_encrypted_disk/test.py::test_add_keys
test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_big[0]
test_format_schema_on_server/test.py::test_drop_cache_protobuf_format
test_grpc_protocol_ssl/test.py::test_insecure_channel
test_file_cluster/test.py::test_count
test_external_http_authenticator/test.py::test_basic_auth_failed
test_hedged_requests_parallel/test.py::test_combination1
test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used
test_keeper_client/test.py::test_base_commands
test_graphite_merge_tree/test.py::test_broken_partial_rollup
Command:[docker ps | wc -l] (once per worker)
Stdout:1 (once per worker)
No running containers (once per worker)
Pruning Docker networks (once per worker)
Command:[docker network prune --force] (once per worker)
Command:[sysctl net.ipv4.ip_local_port_range='55000 65535'] (once per worker)
Stdout:net.ipv4.ip_local_port_range = 55000 65535 (once per worker)
Running tests in /ClickHouse/tests/integration/test_graphite_merge_tree/test.py
Cluster start called. is_up=False
Running tests in /ClickHouse/tests/integration/test_grpc_protocol_ssl/test.py
Running tests in /ClickHouse/tests/integration/test_external_http_authenticator/test.py
ENV DOCKER_KERBEROS_KDC_TAG 9391ecdee8d7
ENV CLICKHOUSE_TESTS_SERVER_BIN_PATH /clickhouse
ENV MSAN_OPTIONS abort_on_error=1 poison_in_dtor=1
ENV JAVA_TOOL_OPTIONS -Djdk.attach.allowAttachSelf=true
ENV TSAN_OPTIONS halt_on_error=1 abort_on_error=1 history_size=7 memory_limit_mb=46080 second_deadlock_stack=1
Cluster start called. is_up=False
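Note: each of the ten xdist workers runs the same preflight before bringing a cluster up: check for leftover containers, prune Docker networks, and widen the host's ephemeral port range so ten parallel clusters do not fight over ports. A hedged sketch of that preflight, with illustrative function names:

```python
import subprocess

def worker_preflight() -> None:
    # "Command:[docker ps | wc -l]" -> "1" means header line only,
    # hence "No running containers" in the log.
    out = subprocess.run("docker ps | wc -l", shell=True,
                         capture_output=True, text=True).stdout.strip()
    if out == "1":
        print("No running containers")
    # "Pruning Docker networks"
    subprocess.run(["docker", "network", "prune", "--force"])
    # Widen ephemeral ports, exactly the sysctl every worker logs.
    subprocess.run(["sysctl", "net.ipv4.ip_local_port_range=55000 65535"])
```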
ENV HOSTNAME a81bd9f02124
Cluster start called. is_up=False
ENV SHLVL 0
ENV HOME /root
ENV OLDPWD /
ENV DOCKER_HELPER_TAG 5dc43a6382f0
ENV PYTHONUNBUFFERED 1
Running tests in /ClickHouse/tests/integration/test_keeper_client/test.py
ENV DOCKER_PYTHON_BOTTLE_TAG caad4729259e
ENV UBSAN_OPTIONS print_stacktrace=1
ENV PYTEST_ADDOPTS --dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 [same 100-item test selection as in the docker run command above] -vvv -ss
Cluster start called. is_up=False
ENV CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH /clickhouse-library-bridge
ENV COMPOSE_HTTP_TIMEOUT 600
ENV DOCKER_MYSQL_PHP_CLIENT_TAG 88be89c1e3b6
ENV DOCKER_DOTNET_CLIENT_TAG 11de0b29a15d
ENV CLICKHOUSE_TESTS_CLIENT_BIN_PATH /clickhouse
ENV DOCKER_MYSQL_JS_CLIENT_TAG 41ba7c2ec2a1
ENV PATH /spark-3.3.2-bin-hadoop3/bin:/opt/gdb/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV DOCKER_KERBERIZED_HADOOP_TAG latest
ENV DOCKER_CHANNEL stable
ENV DOCKER_CLIENT_TIMEOUT 300
ENV DOCKER_POSTGRESQL_JAVA_CLIENT_TAG a4eff5c7f4d6
ENV DOCKER_NGINX_DAV_TAG b55ac9cd7519
ENV DOCKER_MYSQL_GOLANG_CLIENT_TAG 9bec2a638e6e
ENV PWD /ClickHouse/tests/integration
ENV DOCKER_KERBEROS_KDC_TAG 9391ecdee8d7
ENV DOCKER_MYSQL_JAVA_CLIENT_TAG 766bff31cfe4
ENV CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH /clickhouse-odbc-bridge
ENV CLICKHOUSE_TESTS_BASE_CONFIG_DIR /clickhouse-config
ENV CLICKHOUSE_TESTS_SERVER_BIN_PATH /clickhouse
ENV TZ Etc/UTC
Running tests in /ClickHouse/tests/integration/test_encrypted_disk/test.py
ENV JAVA_PATH /usr/lib/jvm/java-11-openjdk-amd64/bin/java
ENV MSAN_OPTIONS abort_on_error=1 poison_in_dtor=1
ENV DOCKER_BASE_TAG 8b2301119731
ENV SPARK_HOME /spark-3.3.2-bin-hadoop3
ENV JAVA_TOOL_OPTIONS -Djdk.attach.allowAttachSelf=true
ENV LC_CTYPE C.UTF-8
ENV TSAN_OPTIONS halt_on_error=1 abort_on_error=1 history_size=7 memory_limit_mb=46080 second_deadlock_stack=1
ENV INTEGRATION_TESTS_RUN_ID 0
ENV WORKER_FREE_PORTS 30100 30101 30102 30103 30104 30105 30106 30107 30108 30109 30110 30111 30112 30113 30114 30115 30116 30117 30118 30119 30120 30121 30122 30123 30124 30125 30126 30127 30128 30129 30130 30131 30132 30133 30134 30135 30136 30137 30138 30139 30140 30141 30142 30143 30144 30145 30146 30147 30148 30149
Cluster start called. is_up=False
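Note: the ENV dump shows each worker holding a disjoint pool of 50 free ports (30100-30149 here; the gw3 worker further down gets 30150-30199). That is consistent with deriving the pool from the xdist worker index; the base and stride below are inferred from this log, not confirmed in the runner code.

```python
import os

def worker_free_ports(base: int = 30000, stride: int = 50) -> range:
    # "gw2" -> 30100..30149, "gw3" -> 30150..30199, matching the
    # WORKER_FREE_PORTS values in this log (base/stride are inferred).
    worker = os.environ.get("PYTEST_XDIST_WORKER", "gw0")
    index = int(worker.removeprefix("gw"))
    start = base + index * stride
    return range(start, start + stride)
```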
ENV HOSTNAME a81bd9f02124
ENV SHLVL 0
ENV PYTEST_XDIST_TESTRUNUID beff948a34514e359894e90157315f20
ENV HOME /root
ENV PYTEST_XDIST_WORKER gw2
ENV OLDPWD /
ENV PYTEST_XDIST_WORKER_COUNT 10
ENV PYTEST_CURRENT_TEST test_file_cluster/test.py::test_count (setup)
ENV DOCKER_HELPER_TAG 5dc43a6382f0
ENV PYTHONUNBUFFERED 1
ENV DOCKER_PYTHON_BOTTLE_TAG caad4729259e
CLUSTER INIT base_config_dir:/clickhouse-config
clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log (logged again before each instance below)
ENV UBSAN_OPTIONS print_stacktrace=1
ENV PYTEST_ADDOPTS --dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 [same 100-item test selection as above] -vvv -ss
ENV CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH /clickhouse-library-bridge
ENV COMPOSE_HTTP_TIMEOUT 600
Setup Keeper
ENV DOCKER_MYSQL_PHP_CLIENT_TAG 88be89c1e3b6
ENV DOCKER_DOTNET_CLIENT_TAG 11de0b29a15d
Cluster name: project_name:roottestfilecluster-gw2. Added instance name:s0_0_0 tag:8b2301119731 base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/.env', '--project-name', 'roottestfilecluster-gw2', '--file', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_0/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
ENV CLICKHOUSE_TESTS_CLIENT_BIN_PATH /clickhouse
ENV DOCKER_MYSQL_JS_CLIENT_TAG 41ba7c2ec2a1
Cluster name: project_name:roottestfilecluster-gw2. Added instance name:s0_0_1 tag:8b2301119731 base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/.env', '--project-name', 'roottestfilecluster-gw2', '--file', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_0/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_1/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
ENV PATH /spark-3.3.2-bin-hadoop3/bin:/opt/gdb/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV DOCKER_KERBERIZED_HADOOP_TAG latest
Running tests in /ClickHouse/tests/integration/test_jbod_load_balancing/test.py
ENV DOCKER_CHANNEL stable
ENV DOCKER_CLIENT_TIMEOUT 300
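Note: each "Added instance" entry shows the cluster helper growing one shared `docker compose` command. The project keeps a single `--env-file`/`--project-name` prefix, and every new instance appends its own `docker-compose.yml` via `--file` (the keeper compose file is spliced in once). A sketch of that accumulation, with illustrative names:

```python
def add_instance(base_cmd: list[str], env_file: str, project: str,
                 compose_file: str) -> list[str]:
    # First instance: start from the shared prefix seen in base_cmd:[...]
    if not base_cmd:
        base_cmd = ["docker", "compose", "--env-file", env_file,
                    "--project-name", project]
    # Every subsequent instance only appends its own compose file;
    # 'pull' and 'up' later run against the accumulated command.
    return base_cmd + ["--file", compose_file]
```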
Cluster name: project_name:roottestfilecluster-gw2. Added instance name:s0_1_0 tag:8b2301119731 base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/.env', '--project-name', 'roottestfilecluster-gw2', '--file', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_0/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_1_0/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
ENV DOCKER_POSTGRESQL_JAVA_CLIENT_TAG a4eff5c7f4d6
Cluster start called. is_up=False
Starting cluster...
ENV DOCKER_NGINX_DAV_TAG b55ac9cd7519
ENV DOCKER_MYSQL_GOLANG_CLIENT_TAG 9bec2a638e6e
ENV PWD /ClickHouse/tests/integration
Running tests in /ClickHouse/tests/integration/test_insert_distributed_async_send/test.py
Running tests in /ClickHouse/tests/integration/test_file_cluster/test.py
ENV DOCKER_MYSQL_JAVA_CLIENT_TAG 766bff31cfe4
ENV CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH /clickhouse-odbc-bridge
Cluster start called. is_up=False
Running tests in /ClickHouse/tests/integration/test_format_schema_on_server/test.py
Cluster start called. is_up=False
ENV CLICKHOUSE_TESTS_BASE_CONFIG_DIR /clickhouse-config
ENV TZ Etc/UTC
Cluster start called. is_up=False
ENV JAVA_PATH /usr/lib/jvm/java-11-openjdk-amd64/bin/java
ENV DOCKER_BASE_TAG 8b2301119731
ENV SPARK_HOME /spark-3.3.2-bin-hadoop3
ENV LC_CTYPE C.UTF-8
ENV INTEGRATION_TESTS_RUN_ID 0
ENV WORKER_FREE_PORTS 30150 30151 30152 30153 30154 30155 30156 30157 30158 30159 30160 30161 30162 30163 30164 30165 30166 30167 30168 30169 30170 30171 30172 30173 30174 30175 30176 30177 30178 30179 30180 30181 30182 30183 30184 30185 30186 30187 30188 30189 30190 30191 30192 30193 30194 30195 30196 30197 30198 30199
ENV PYTEST_XDIST_TESTRUNUID beff948a34514e359894e90157315f20
ENV PYTEST_XDIST_WORKER gw3
ENV PYTEST_XDIST_WORKER_COUNT 10
ENV PYTEST_CURRENT_TEST test_hedged_requests_parallel/test.py::test_combination1 (setup)
CLUSTER INIT base_config_dir:/clickhouse-config
clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log (logged again before each instance below)
Cluster name: project_name:roottesthedgedrequestsparallel-gw3. Added instance name:node tag:8b2301119731 base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/.env', '--project-name', 'roottesthedgedrequestsparallel-gw3', '--file', '/ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
Cluster name: project_name:roottesthedgedrequestsparallel-gw3. Added instance name:node_1 tag:8b2301119731 base_cmd:[as for node, plus '--file', '/ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_1/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
Cluster name: project_name:roottesthedgedrequestsparallel-gw3. Added instance name:node_2 tag:8b2301119731 base_cmd:[as above, plus '--file', '/ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_2/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
Cluster name: project_name:roottesthedgedrequestsparallel-gw3. Added instance name:node_3 tag:8b2301119731 base_cmd:[as above, plus '--file', '/ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_3/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
Cluster name: project_name:roottesthedgedrequestsparallel-gw3. Added instance name:node_4 tag:8b2301119731 base_cmd:[as above, plus '--file', '/ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_4/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
Running tests in /ClickHouse/tests/integration/test_hedged_requests_parallel/test.py
Cluster start called. is_up=False
Docker networks for project roottestkeeperclient-gw4 are NETWORK ID NAME DRIVER SCOPE (and likewise, header row only, for each of the other nine projects)
Docker containers for project roottestkeeperclient-gw4 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (and likewise for each of the other nine projects)
Docker volumes for project roottestgraphitemergetree-gw5 are DRIVER VOLUME NAME (and likewise for each of the other nine projects)
Cleanup called (once per worker)
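Note: before cleanup, every worker lists the networks, containers, and volumes that still belong to its compose project; the header-only output above means all of them are already empty. A sketch of how such listings could be gathered; the exact filter the helper uses is not visible in the log, so filtering on the compose project label is an assumption.

```python
import subprocess

def docker_project_state(project: str) -> None:
    # Assumed filter: compose tags resources with a project label.
    filt = f"label=com.docker.compose.project={project}"
    listings = {
        "networks": ["docker", "network", "ls", "--filter", filt],
        "containers": ["docker", "ps", "--all", "--filter", filt],
        "volumes": ["docker", "volume", "ls", "--filter", filt],
    }
    for what, cmd in listings.items():
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        # An empty project prints only the header row, as in this log.
        print(f"Docker {what} for project {project} are {out}")
```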
Docker networks for project roottestgraphitemergetree-gw5 are NETWORK ID NAME DRIVER SCOPE (the cleanup then re-lists the empty networks, containers, and volumes for every project a second time)
Command:[docker container list --all --filter name='^/roottestfilecluster-gw2-.*-1$' --format '{{.ID}}:{{.Names}}'] (the same scan runs for each other project)
Unstopped containers: {} (once per worker)
No running containers for project: roottestfilecluster-gw2 (and likewise for each other project)
Trying to prune unused networks... (once per worker)
Trying to prune unused images...
Command:[docker image prune -f] (once per worker)
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
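Note: the one cleanup command the log shows verbatim is the per-project container scan; its output feeds the "Unstopped containers: {}" line, which reads like the repr of an empty dict. A sketch built directly on that logged command:

```python
import subprocess

def unstopped_containers(project: str) -> dict[str, str]:
    # Exactly the command logged per project:
    #   docker container list --all --filter name='^/<project>-.*-1$'
    #          --format '{{.ID}}:{{.Names}}'
    out = subprocess.run(
        ["docker", "container", "list", "--all",
         "--filter", f"name=^/{project}-.*-1$",
         "--format", "{{.ID}}:{{.Names}}"],
        capture_output=True, text=True,
    ).stdout
    containers = dict(line.split(":", 1) for line in out.splitlines())
    print(f"Unstopped containers: {containers}")  # "{}" when clean
    return containers
```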
Command:[docker volume ls | wc -l]
Stdout:1
Volumes pruned: 1 (once per worker; concurrent prunes from other workers fail with Stderr:Error response from daemon: a prune operation is already running, Exitcode:1, and retry the volume check)
Setup directory for instance: s0_0_0
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files ['/ClickHouse/tests/integration/test_file_cluster/configs/cluster.xml'] to /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_0/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_0/database
Setup logs dir /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_0/logs
Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"]
Setup directory for instance: s0_0_1
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files ['/ClickHouse/tests/integration/test_file_cluster/configs/cluster.xml'] to /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_1/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_1/database
Setup logs dir /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_1/logs
Entrypoint cmd: ["clickhouse", "server", ...] (as for s0_0_0)
Setup directory for instance: s0_1_0 (same steps; config, database, and logs dirs under /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_1_0/)
Setup directory for instance: node
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files ['/ClickHouse/tests/integration/test_keeper_client/configs/keeper_config.xml'] to /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/node/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/node/database
Setup logs dir /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/node/logs
Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!"
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/.env
Env {same keeper settings, with keeper1-3 log/config/coordination dirs under /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/} stored in /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/.env
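Note: each cluster serialises its settings ("Env {...} stored in .../.env") into a compose env file so the keeper image, binary path, and per-keeper log/config/coordination directories reach docker compose. A minimal sketch of that step, assuming plain KEY=VALUE lines; the real helper's quoting rules are not shown in the log.

```python
def store_env(env: dict[str, str], path: str) -> None:
    # Mirrors "Env {...} stored in <path>" from the log; docker compose
    # reads this file via the --env-file flag seen in base_cmd.
    with open(path, "w") as env_file:
        for key, value in env.items():
            env_file.write(f"{key}={value}\n")
```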
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found (logged before each Docker API call)
Stdout:1
Volumes pruned: 1
Setup directory for instance: instance
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files ['/ClickHouse/tests/integration/test_graphite_merge_tree/configs/graphite_rollup.xml'] to /ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/instance/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/instance/database
Setup logs dir /ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/instance/logs
Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"]
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/.env
http://localhost:None "GET /version HTTP/1.1" 200 826
Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/.env --project-name roottestkeeperclient-gw4 --file /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml pull]
http://localhost:None "GET /version HTTP/1.1" 200 826
Command:[docker compose --env-file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/.env --project-name roottestfilecluster-gw2 --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_1_0/docker-compose.yml pull]
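Note: with the .env files written, each worker pre-pulls every image referenced by its accumulated compose command before starting the cluster, which is what the "Command:[docker compose ... pull]" lines record. A sketch:

```python
import subprocess

def pull_project_images(base_cmd: list[str]) -> None:
    # base_cmd is the accumulated "docker compose --env-file ...
    # --project-name ... --file ..." prefix built per instance above;
    # 'pull' fetches images for every compose file at once.
    subprocess.run(base_cmd + ["pull"], check=True)
```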
Command:[docker volume ls | wc -l] Create directory for configuration generated in this helper Create directory for configuration generated in this helper Create directory for common tests configuration Stdout:1 Create directory for common tests configuration Volumes pruned: 1 Setup directory for instance: node Copy common configuration from helpers Copy common configuration from helpers Create directory for configuration generated in this helper Create directory for common tests configuration Generate and write macros file Copy common configuration from helpers Generate and write macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/instance/configs/config.d Copy custom test config files ['/ClickHouse/tests/integration/test_external_http_authenticator/configs/config.xml'] to /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/instance/database Generate and write macros file Database files taken from /ClickHouse/tests/integration/test_format_schema_on_server/clickhouse_path Copy custom test config files ['/ClickHouse/tests/integration/test_jbod_load_balancing/configs/config.d/storage_configuration.xml'] to /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/node/database Database copied from /ClickHouse/tests/integration/test_format_schema_on_server/clickhouse_path to /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/instance/database Setup logs dir /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/node/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup logs dir /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/instance/logs Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/.env Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] Stdout:1 Volumes pruned: 1 No config file found Setup directory for instance: n1 Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Command:[docker compose --env-file 
/ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/.env --project-name roottestgraphitemergetree-gw5 --file /ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/instance/docker-compose.yml pull] Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_insert_distributed_async_send/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n1/configs/config.d Setup database dir /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/node/database Setup logs dir /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/node/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] Setup database dir /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n1/database No config file found Setup logs dir /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n1/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: n2 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_insert_distributed_async_send/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n2/configs/config.d Stdout:1 Setup database dir /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n2/database Volumes pruned: 1 Setup directory for instance: node Setup logs dir /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n2/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: n3 Create directory for configuration generated in this helper Create directory for configuration generated in this helper Create directory for common tests configuration Create directory for common tests configuration Copy common configuration from helpers Copy common configuration from helpers Generate and write macros file Generate and write macros file Copy custom test config files 
['/ClickHouse/tests/integration/test_insert_distributed_async_send/configs/remote_servers_split.xml'] to /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n3/configs/config.d Copy custom test config files ['/ClickHouse/tests/integration/test_grpc_protocol_ssl/configs/grpc_config.xml', '/ClickHouse/tests/integration/test_grpc_protocol_ssl/configs/server-key.pem', '/ClickHouse/tests/integration/test_grpc_protocol_ssl/configs/server-cert.pem', '/ClickHouse/tests/integration/test_grpc_protocol_ssl/configs/ca-cert.pem'] to /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n3/database Setup logs dir /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n3/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: n4 http://localhost:None "GET /version HTTP/1.1" 200 826 Create directory for configuration generated in this helper Create directory for common tests configuration http://localhost:None "GET /version HTTP/1.1" 200 826 Setup database dir /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/node/database Copy common configuration from helpers Command:[docker compose --env-file /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/.env --project-name roottestformatschemaonserver-gw6 --file /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/instance/docker-compose.yml pull] http://localhost:None "GET /version HTTP/1.1" 200 826 Setup logs dir /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/node/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'report_atomic_races=0 halt_on_error=1 abort_on_error=1 history_size=7 memory_limit_mb=46080 second_deadlock_stack=1', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/.env Command:[docker compose --env-file /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/.env --project-name roottestjbodloadbalancing-gw8 --file /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/node/docker-compose.yml pull] Generate and write macros file Command:[docker compose --env-file /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/.env --project-name roottestexternalhttpauthenticator-gw7 --file /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/node/docker-compose.yml pull] Copy custom test config files ['/ClickHouse/tests/integration/test_insert_distributed_async_send/configs/remote_servers_split.xml'] to /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n4/configs/config.d Setup database dir /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n4/database Setup logs dir 
/ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n4/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Stdout:1 Volumes pruned: 1 Setup directory for instance: node Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Stdout:1 Generate and write macros file Volumes pruned: 1 Setup directory for instance: node Copy custom test config files ['/ClickHouse/tests/integration/test_hedged_requests_parallel/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node/configs/config.d Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Setup database dir /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node/database Setup logs dir /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" Generate and write macros file Setup directory for instance: node_1 Copy custom test config files ['/ClickHouse/tests/integration/test_encrypted_disk/configs/storage.xml', '/ClickHouse/tests/integration/test_encrypted_disk/configs/allow_backup_path.xml'] to /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/node/configs/config.d Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers http://localhost:None "GET /version HTTP/1.1" 200 826 Setup database dir /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/node/database Setup logs dir /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/node/logs Generate and write macros file Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" 
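Two entrypoint shapes appear in these records: the exec form ["clickhouse", "server", ...] and, for test_hedged_requests_parallel and test_encrypted_disk, a bash -c wrapper that starts the server with --daemon and keeps tail -f /dev/null in the foreground, so the container survives server restarts during a test. On the declaration side this is chosen per instance; a sketch assuming the usual helpers.cluster API (the stay_alive flag exists there, though details may differ by version):

from helpers.cluster import ClickHouseCluster

cluster = ClickHouseCluster(__file__)
node = cluster.add_instance(
    "node",
    main_configs=["configs/storage.xml"],  # copied into configs/config.d, as logged
    with_minio=True,                       # pulls in docker_compose_minio.yml
    stay_alive=True,                       # selects the trap/--daemon/tail entrypoint
)
# cluster.start() then creates the directories, writes the .env file and runs
# `docker compose ... up -d --no-recreate`, producing the records seen here.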
external_dir_abs_path=/ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/backups Copy custom test config files [] to /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_1/configs/config.d Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'MINIO_CERTS_DIR': '/ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/minio/certs', 'MINIO_DATA_DIR': '/ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/minio/data', 'MINIO_PORT': '9001', 'SSL_CERT_FILE': '/ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/minio/certs/public.crt', 'RESOLVER_LOGS': '/ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/resolver', 'RESOLVER_LOGS_FS': 'bind'} stored in /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/.env Command:[docker compose --env-file /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/.env --project-name roottestgrpcprotocolssl-gw9 --file /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/node/docker-compose.yml pull] Setup database dir /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_1/database Setup logs dir /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_1/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: node_2 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Generate and write macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_2/configs/config.d Setup database dir /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_2/database Setup logs dir /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_2/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: node_3 Create directory for configuration generated in this helper http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/.env --project-name roottestinsertdistributedasyncsend-gw0 --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n1/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n2/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n3/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n4/docker-compose.yml pull] Create directory for common tests configuration Copy common configuration from helpers Generate and write 
macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_3/configs/config.d Setup database dir /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_3/database Setup logs dir /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_3/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: node_4 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_4/configs/config.d http://localhost:None "GET /version HTTP/1.1" 200 826 Setup database dir /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_4/database Setup logs dir /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_4/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/.env Command:[docker compose --env-file /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/.env --project-name roottestencrypteddisk-gw1 --file /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml pull] Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/.env --project-name roottesthedgedrequestsparallel-gw3 --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_1/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_2/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_3/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_4/docker-compose.yml pull] Stderr: instance Pulling Stderr: instance Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/.env --project-name roottestgraphitemergetree-gw5 --file /ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/instance/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file 
/ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/.env --project-name roottestgraphitemergetree-gw5 --file /ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/instance/docker-compose.yml up -d --no-recreate] Stderr: node Pulling Stderr: node Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/.env --project-name roottestjbodloadbalancing-gw8 --file /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/node/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/.env --project-name roottestjbodloadbalancing-gw8 --file /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/node/docker-compose.yml up -d --no-recreate] Stderr: node Pulling Stderr: node Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/.env --project-name roottestgrpcprotocolssl-gw9 --file /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/node/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/.env --project-name roottestgrpcprotocolssl-gw9 --file /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/node/docker-compose.yml up -d --no-recreate] Stderr: node_3 Skipped - Image is already being pulled by node_2 Stderr: node_4 Skipped - Image is already being pulled by node_2 Stderr: node Skipped - Image is already being pulled by node_2 Stderr: node_1 Skipped - Image is already being pulled by node_2 Stderr: node_2 Pulling Stderr: node_2 Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/.env --project-name roottesthedgedrequestsparallel-gw3 --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_1/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_2/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_3/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_4/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/.env --project-name roottesthedgedrequestsparallel-gw3 --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_1/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_2/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_3/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_4/docker-compose.yml up -d --no-recreate] Stderr: zoo1 Skipped - Image is already being pulled by zoo3 Stderr: s0_0_1 Skipped - Image is already being pulled by zoo3 Stderr: s0_1_0 Skipped - Image is already being pulled by zoo3 
Stderr: s0_0_0 Skipped - Image is already being pulled by zoo3 Stderr: zoo2 Skipped - Image is already being pulled by zoo3 Stderr: zoo3 Pulling Stderr: zoo3 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/keeper1/log', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/keeper1/config', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/keeper1/coordination', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/keeper2/log', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/keeper2/config', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/keeper2/coordination', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/keeper3/log', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/keeper3/config', '/ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/keeper3/coordination'] Command:[docker compose --project-name roottestfilecluster-gw2 --env-file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Stderr: n3 Skipped - Image is already being pulled by n2 Stderr: n4 Skipped - Image is already being pulled by n2 Stderr: n1 Skipped - Image is already being pulled by n2 Stderr: n2 Pulling Stderr: n2 Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/.env --project-name roottestinsertdistributedasyncsend-gw0 --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n1/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n2/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n3/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n4/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/.env --project-name roottestinsertdistributedasyncsend-gw0 --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n1/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n2/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n3/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n4/docker-compose.yml up -d --no-recreate] Stderr: instance Pulling Stderr: instance Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/.env --project-name roottestformatschemaonserver-gw6 --file /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/instance/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/.env --project-name roottestformatschemaonserver-gw6 --file /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/instance/docker-compose.yml up -d --no-recreate] Stderr: proxy2 Skipped - Image is already being pulled by proxy1 Stderr: node Pulling 
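Each "Command:[docker compose ... pull]" above is later followed by the same file set run with "up -d --no-recreate": images are fetched first, then containers are created without recreating anything another step already started. Roughly, as a Python sketch of that shelling-out (not the runner's actual code):

import subprocess

def compose(env_file: str, project: str, files: list, *args: str) -> None:
    cmd = ["docker", "compose", "--env-file", env_file, "--project-name", project]
    for f in files:
        cmd += ["--file", f]
    subprocess.check_call(cmd + list(args))

# compose(env, "roottestgraphitemergetree-gw5", [compose_yml], "pull")
# compose(env, "roottestgraphitemergetree-gw5", [compose_yml], "up", "-d", "--no-recreate")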
Stderr: proxy1 Pulling Stderr: resolver Pulling Stderr: minio1 Pulling Stderr: resolver Pulled Stderr: proxy1 Pulled Stderr: minio1 Pulled Stderr: node Pulled Trying to create Minio instance by command docker compose --project-name roottestencrypteddisk-gw1 --env-file /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d Command:[docker compose --project-name roottestencrypteddisk-gw1 --env-file /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d] Stderr: node Pulling Stderr: node Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/.env --project-name roottestexternalhttpauthenticator-gw7 --file /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/node/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/.env --project-name roottestexternalhttpauthenticator-gw7 --file /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/node/docker-compose.yml up -d --no-recreate] Stderr: node Skipped - Image is already being pulled by zoo3 Stderr: zoo1 Skipped - Image is already being pulled by zoo3 Stderr: zoo2 Skipped - Image is already being pulled by zoo3 Stderr: zoo3 Pulling Stderr: zoo3 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper1/log', '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper1/config', '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper1/coordination', '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper2/log', '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper2/config', '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper2/coordination', '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper3/log', '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper3/config', '/ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/keeper3/coordination'] Command:[docker compose --project-name roottestkeeperclient-gw4 --env-file /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Stderr: Network roottestgraphitemergetree-gw5_default Creating Stderr: Network roottestgraphitemergetree-gw5_default Created Stderr: Container roottestgraphitemergetree-gw5-instance-1 Creating Stderr: Container roottestgraphitemergetree-gw5-instance-1 Created Stderr: Container roottestgraphitemergetree-gw5-instance-1 Starting Stderr: Container roottestgraphitemergetree-gw5-instance-1 Started ClickHouse instance created get_instance_ip instance_name=instance http://localhost:None "GET /v1.46/containers/roottestgraphitemergetree-gw5-instance-1/json HTTP/1.1" 200 None get_instance_ip instance_name=instance http://localhost:None "GET /v1.46/containers/roottestgraphitemergetree-gw5-instance-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in instance, ip: 172.16.1.2... 
http://localhost:None "GET /v1.46/containers/roottestgraphitemergetree-gw5-instance-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/accd6ee7a3ca9d42577dc02324d0f1e7ef920ba1ce52c159f15684101294e823/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/accd6ee7a3ca9d42577dc02324d0f1e7ef920ba1ce52c159f15684101294e823/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/accd6ee7a3ca9d42577dc02324d0f1e7ef920ba1ce52c159f15684101294e823/json HTTP/1.1" 200 None Stderr: Network roottestjbodloadbalancing-gw8_default Creating Stderr: Network roottestjbodloadbalancing-gw8_default Created Stderr: Container roottestjbodloadbalancing-gw8-node-1 Creating Stderr: Container roottestjbodloadbalancing-gw8-node-1 Created Stderr: Container roottestjbodloadbalancing-gw8-node-1 Starting Stderr: Container roottestjbodloadbalancing-gw8-node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestjbodloadbalancing-gw8-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestjbodloadbalancing-gw8-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.2.2... http://localhost:None "GET /v1.46/containers/roottestjbodloadbalancing-gw8-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/accd6ee7a3ca9d42577dc02324d0f1e7ef920ba1ce52c159f15684101294e823/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/accd6ee7a3ca9d42577dc02324d0f1e7ef920ba1ce52c159f15684101294e823/json HTTP/1.1" 200 None Stderr: Network roottestgrpcprotocolssl-gw9_default Creating Stderr: Network roottestgrpcprotocolssl-gw9_default Created Stderr: Container roottestgrpcprotocolssl-gw9-node-1 Creating Stderr: Container roottestgrpcprotocolssl-gw9-node-1 Created Stderr: Container roottestgrpcprotocolssl-gw9-node-1 Starting Stderr: Container roottestgrpcprotocolssl-gw9-node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestgrpcprotocolssl-gw9-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestgrpcprotocolssl-gw9-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.3.2... 
http://localhost:None "GET /v1.46/containers/roottestgrpcprotocolssl-gw9-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/accd6ee7a3ca9d42577dc02324d0f1e7ef920ba1ce52c159f15684101294e823/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/accd6ee7a3ca9d42577dc02324d0f1e7ef920ba1ce52c159f15684101294e823/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/accd6ee7a3ca9d42577dc02324d0f1e7ef920ba1ce52c159f15684101294e823/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/accd6ee7a3ca9d42577dc02324d0f1e7ef920ba1ce52c159f15684101294e823/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/accd6ee7a3ca9d42577dc02324d0f1e7ef920ba1ce52c159f15684101294e823/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/accd6ee7a3ca9d42577dc02324d0f1e7ef920ba1ce52c159f15684101294e823/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None Stderr: Network roottesthedgedrequestsparallel-gw3_default Creating Stderr: Network roottesthedgedrequestsparallel-gw3_default Created Stderr: Container roottesthedgedrequestsparallel-gw3-node_4-1 Creating Stderr: Container roottesthedgedrequestsparallel-gw3-node-1 Creating Stderr: Container roottesthedgedrequestsparallel-gw3-node_1-1 Creating Stderr: Container roottesthedgedrequestsparallel-gw3-node_2-1 Creating Stderr: Container roottesthedgedrequestsparallel-gw3-node_3-1 Creating Stderr: Container roottesthedgedrequestsparallel-gw3-node_2-1 Created Stderr: Container roottesthedgedrequestsparallel-gw3-node_3-1 Created Stderr: Container roottesthedgedrequestsparallel-gw3-node_4-1 Created Stderr: Container roottesthedgedrequestsparallel-gw3-node_1-1 Created 
Stderr: Container roottesthedgedrequestsparallel-gw3-node-1 Created Stderr: Container roottesthedgedrequestsparallel-gw3-node_2-1 Starting Stderr: Container roottesthedgedrequestsparallel-gw3-node_4-1 Starting Stderr: Container roottesthedgedrequestsparallel-gw3-node_3-1 Starting Stderr: Container roottesthedgedrequestsparallel-gw3-node_1-1 Starting Stderr: Container roottesthedgedrequestsparallel-gw3-node-1 Starting Stderr: Container roottesthedgedrequestsparallel-gw3-node_3-1 Started Stderr: Container roottesthedgedrequestsparallel-gw3-node-1 Started Stderr: Container roottesthedgedrequestsparallel-gw3-node_4-1 Started Stderr: Container roottesthedgedrequestsparallel-gw3-node_1-1 Started Stderr: Container roottesthedgedrequestsparallel-gw3-node_2-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.4.6... http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/accd6ee7a3ca9d42577dc02324d0f1e7ef920ba1ce52c159f15684101294e823/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/accd6ee7a3ca9d42577dc02324d0f1e7ef920ba1ce52c159f15684101294e823/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None Stderr:time="2025-04-02T02:26:48Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestfilecluster-gw2_default Creating Stderr: Network roottestfilecluster-gw2_default Created Stderr: Container roottestfilecluster-gw2-zoo1-1 Creating Stderr: Container roottestfilecluster-gw2-zoo2-1 Creating Stderr: Container roottestfilecluster-gw2-zoo3-1 Creating Stderr: Container roottestfilecluster-gw2-zoo2-1 Created Stderr: Container roottestfilecluster-gw2-zoo1-1 Created Stderr: Container roottestfilecluster-gw2-zoo3-1 Created Stderr: Container roottestfilecluster-gw2-zoo3-1 Starting Stderr: Container roottestfilecluster-gw2-zoo1-1 Starting Stderr: Container roottestfilecluster-gw2-zoo2-1 Starting Stderr: Container roottestfilecluster-gw2-zoo3-1 Started Stderr: Container roottestfilecluster-gw2-zoo2-1 Started Stderr: Container roottestfilecluster-gw2-zoo1-1 Started Stderr:time="2025-04-02T02:26:49Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T02:26:49Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestfilecluster-gw2-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, 
ip:172.16.5.4, port:2181, use_ssl:False Connecting to 172.16.5.4(172.16.5.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/accd6ee7a3ca9d42577dc02324d0f1e7ef920ba1ce52c159f15684101294e823/json HTTP/1.1" 200 None ClickHouse instance started Executing query CREATE DATABASE test on instance http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None Connecting to 172.16.5.4(172.16.5.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS test.graphite; CREATE TABLE test.graphite (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=8192; on instance http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None Connecting to 172.16.5.4(172.16.5.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9b3ce4ed0342d4871460a495dedc7477f44ca38fd9d970083df254348c48e8ce/json HTTP/1.1" 200 None ClickHouse node started Executing query CREATE TABLE data_least_used (p UInt8) ENGINE = MergeTree ORDER BY tuple() SETTINGS storage_policy = 'jbod_least_used'; SYSTEM STOP MERGES data_least_used; INSERT INTO data_least_used SELECT * FROM numbers(10); INSERT INTO data_least_used SELECT * FROM numbers(10); INSERT INTO data_least_used SELECT * FROM numbers(10); INSERT INTO data_least_used SELECT * FROM numbers(10); on node http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None Stderr: Network roottestformatschemaonserver-gw6_default Creating 
Stderr: Network roottestformatschemaonserver-gw6_default Created Stderr: Container roottestformatschemaonserver-gw6-instance-1 Creating Stderr: Container roottestformatschemaonserver-gw6-instance-1 Created Stderr: Container roottestformatschemaonserver-gw6-instance-1 Starting Stderr: Container roottestformatschemaonserver-gw6-instance-1 Started ClickHouse instance created get_instance_ip instance_name=instance http://localhost:None "GET /v1.46/containers/roottestformatschemaonserver-gw6-instance-1/json HTTP/1.1" 200 None get_instance_ip instance_name=instance http://localhost:None "GET /v1.46/containers/roottestformatschemaonserver-gw6-instance-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in instance, ip: 172.16.7.2... http://localhost:None "GET /v1.46/containers/roottestformatschemaonserver-gw6-instance-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None Stderr: Network roottestinsertdistributedasyncsend-gw0_default Creating Stderr: Network roottestinsertdistributedasyncsend-gw0_default Created Stderr: Container roottestinsertdistributedasyncsend-gw0-n3-1 Creating Stderr: Container roottestinsertdistributedasyncsend-gw0-n1-1 Creating Stderr: Container roottestinsertdistributedasyncsend-gw0-n4-1 Creating Stderr: Container roottestinsertdistributedasyncsend-gw0-n2-1 Creating Stderr: Container roottestinsertdistributedasyncsend-gw0-n1-1 Created Stderr: Container roottestinsertdistributedasyncsend-gw0-n2-1 Created Stderr: Container roottestinsertdistributedasyncsend-gw0-n4-1 Created Stderr: Container roottestinsertdistributedasyncsend-gw0-n3-1 Created Stderr: Container roottestinsertdistributedasyncsend-gw0-n3-1 Starting Stderr: Container roottestinsertdistributedasyncsend-gw0-n4-1 Starting Stderr: Container roottestinsertdistributedasyncsend-gw0-n2-1 Starting Stderr: Container roottestinsertdistributedasyncsend-gw0-n1-1 Starting Stderr: Container roottestinsertdistributedasyncsend-gw0-n3-1 Started Stderr: Container roottestinsertdistributedasyncsend-gw0-n1-1 Started Stderr: Container roottestinsertdistributedasyncsend-gw0-n2-1 Started Stderr: Container roottestinsertdistributedasyncsend-gw0-n4-1 Started ClickHouse instance created get_instance_ip instance_name=n1 http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/roottestinsertdistributedasyncsend-gw0-n1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=n1 http://localhost:None "GET /v1.46/containers/roottestinsertdistributedasyncsend-gw0-n1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in n1, ip: 172.16.6.3... 
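"Wait ZooKeeper to start" followed by repeated "Connection dropped: socket connection error: Connection refused" is kazoo probing the keeper containers before they accept connections; the helper simply keeps reconnecting. A minimal version of that wait loop (a sketch, the framework's actual retry policy differs):

import time

from kazoo.client import KazooClient

def wait_for_zookeeper(ip: str, port: int = 2181, attempts: int = 30) -> KazooClient:
    for _ in range(attempts):
        zk = KazooClient(hosts=f"{ip}:{port}")
        try:
            zk.start(timeout=5)  # raises KazooTimeoutError while the port is closed
            return zk
        except Exception:
            zk.stop()  # safe even if no session was established
            time.sleep(1)
    raise TimeoutError(f"ZooKeeper at {ip}:{port} never became reachable")

# e.g. wait_for_zookeeper("172.16.5.4") for the zoo1 instance above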
http://localhost:None "GET /v1.46/containers/roottestinsertdistributedasyncsend-gw0-n1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/898ab95780e1cdb29cd9860df0e82a31bdb65965a8d674dee5b173e38741d181/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS test.graphite; CREATE TABLE test.graphite (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup_broken') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=1; on instance Connecting to 172.16.5.4(172.16.5.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None Executing query SELECT count(), disk_name FROM system.parts WHERE table = 'data_least_used' GROUP BY disk_name ORDER BY disk_name on node http://localhost:None "GET /v1.46/containers/898ab95780e1cdb29cd9860df0e82a31bdb65965a8d674dee5b173e38741d181/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/898ab95780e1cdb29cd9860df0e82a31bdb65965a8d674dee5b173e38741d181/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/407ae9f96c12167ecfb8e66df71d33e9a8f7d012ad4b8091594884e3cabf56a8/json HTTP/1.1" 200 None ClickHouse node started Stderr: Network roottestexternalhttpauthenticator-gw7_default Creating Stderr: Network roottestexternalhttpauthenticator-gw7_default Created Stderr: Container roottestexternalhttpauthenticator-gw7-node-1 Creating Stderr: Container roottestexternalhttpauthenticator-gw7-node-1 Created Stderr: Container roottestexternalhttpauthenticator-gw7-node-1 Starting Stderr: Container roottestexternalhttpauthenticator-gw7-node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestexternalhttpauthenticator-gw7-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestexternalhttpauthenticator-gw7-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.9.2... 
http://localhost:None "GET /v1.46/containers/roottestexternalhttpauthenticator-gw7-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d3c30dc1d0ab04c6285af8f5a0cda41b94ef365f4965b79daa17731185931fc5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/898ab95780e1cdb29cd9860df0e82a31bdb65965a8d674dee5b173e38741d181/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d3c30dc1d0ab04c6285af8f5a0cda41b94ef365f4965b79daa17731185931fc5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/898ab95780e1cdb29cd9860df0e82a31bdb65965a8d674dee5b173e38741d181/json HTTP/1.1" 200 None Executing query INSERT INTO test.graphite FORMAT TSV on instance Connecting to 172.16.5.4(172.16.5.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS data_least_used SYNC on node http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d3c30dc1d0ab04c6285af8f5a0cda41b94ef365f4965b79daa17731185931fc5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/898ab95780e1cdb29cd9860df0e82a31bdb65965a8d674dee5b173e38741d181/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d3c30dc1d0ab04c6285af8f5a0cda41b94ef365f4965b79daa17731185931fc5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/898ab95780e1cdb29cd9860df0e82a31bdb65965a8d674dee5b173e38741d181/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2d319a09bbc3215bd4cea587c1858b725acde83d0a214e875f58ef78959edaac/json HTTP/1.1" 200 None ClickHouse node started get_instance_ip instance_name=node_1 http://localhost:None "GET /v1.46/containers/d3c30dc1d0ab04c6285af8f5a0cda41b94ef365f4965b79daa17731185931fc5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/898ab95780e1cdb29cd9860df0e82a31bdb65965a8d674dee5b173e38741d181/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node_1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node_1 http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node_1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node_1, ip: 172.16.4.3... 
http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node_1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/ad676b944a9772651e590b04a97a645a2d1af68ba0fbd32256be480f7d1adcb9/json HTTP/1.1" 200 None ClickHouse node_1 started get_instance_ip instance_name=node_2 http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node_2-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node_2 http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node_2-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node_2, ip: 172.16.4.5... http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node_2-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5346a85544a45ccc4917aaecc8f365261cee03330f0c2a89604626c4b3362fa/json HTTP/1.1" 200 None ClickHouse node_2 started get_instance_ip instance_name=node_3 http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node_3-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node_3 http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node_3-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node_3, ip: 172.16.4.4... http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node_3-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c0f45129d391b845287583589b32cc8e8b06ca8e21198a3fe0798baffa25c2b1/json HTTP/1.1" 200 None ClickHouse node_3 started get_instance_ip instance_name=node_4 http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node_4-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node_4 http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node_4-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node_4, ip: 172.16.4.2... 
http://localhost:None "GET /v1.46/containers/roottesthedgedrequestsparallel-gw3-node_4-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/bb3ffb7bccf60c7206f882df2455c28d9c006c2a689b784a5cb35f7e3b252e95/json HTTP/1.1" 200 None ClickHouse node_4 started Executing query CREATE TABLE test_hedged (id UInt32, date Date) ENGINE = MergeTree() ORDER BY id PARTITION BY toYYYYMM(date) on node_1 Stderr:time="2025-04-02T02:26:48Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestkeeperclient-gw4_default Creating Stderr: Network roottestkeeperclient-gw4_default Created Stderr: Container roottestkeeperclient-gw4-zoo3-1 Creating Stderr: Container roottestkeeperclient-gw4-zoo1-1 Creating Stderr: Container roottestkeeperclient-gw4-zoo2-1 Creating Stderr: Container roottestkeeperclient-gw4-zoo1-1 Created Stderr: Container roottestkeeperclient-gw4-zoo3-1 Created Stderr: Container roottestkeeperclient-gw4-zoo2-1 Created Stderr: Container roottestkeeperclient-gw4-zoo3-1 Starting Stderr: Container roottestkeeperclient-gw4-zoo1-1 Starting Stderr: Container roottestkeeperclient-gw4-zoo2-1 Starting http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None Stderr: Container roottestkeeperclient-gw4-zoo1-1 Started Stderr: Container roottestkeeperclient-gw4-zoo3-1 Started Stderr: Container roottestkeeperclient-gw4-zoo2-1 Started Stderr:time="2025-04-02T02:26:51Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T02:26:51Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/d3c30dc1d0ab04c6285af8f5a0cda41b94ef365f4965b79daa17731185931fc5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.10.2, port:2181, use_ssl:False http://localhost:None "GET /v1.46/containers/898ab95780e1cdb29cd9860df0e82a31bdb65965a8d674dee5b173e38741d181/json HTTP/1.1" 200 None Connecting to 172.16.10.2(172.16.10.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.10.2(172.16.10.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d3c30dc1d0ab04c6285af8f5a0cda41b94ef365f4965b79daa17731185931fc5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/898ab95780e1cdb29cd9860df0e82a31bdb65965a8d674dee5b173e38741d181/json HTTP/1.1" 200 None Executing query CREATE TABLE data_least_used_detect_background_changes (p UInt8) ENGINE = MergeTree ORDER BY tuple() SETTINGS storage_policy = 'jbod_least_used'; SYSTEM STOP MERGES data_least_used_detect_background_changes; on node [gw8] PASSED test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used Executing query OPTIMIZE TABLE test.graphite PARTITION 200109 FINAL; SELECT * FROM test.graphite; on instance test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used_detect_background_changes Stderr:time="2025-04-02T02:26:48Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestencrypteddisk-gw1_default Creating Stderr: Network roottestencrypteddisk-gw1_default Created Stderr: Volume "roottestencrypteddisk-gw1_data1-1" Creating Stderr: Volume 
"roottestencrypteddisk-gw1_data1-1" Created Stderr: Container roottestencrypteddisk-gw1-proxy2-1 Creating Stderr: Container roottestencrypteddisk-gw1-proxy1-1 Creating Stderr: Container roottestencrypteddisk-gw1-proxy1-1 Created Stderr: Container roottestencrypteddisk-gw1-proxy2-1 Created Stderr: Container roottestencrypteddisk-gw1-minio1-1 Creating Stderr: Container roottestencrypteddisk-gw1-resolver-1 Creating Stderr: Container roottestencrypteddisk-gw1-resolver-1 Created Stderr: Container roottestencrypteddisk-gw1-minio1-1 Created Stderr: Container roottestencrypteddisk-gw1-proxy1-1 Starting Stderr: Container roottestencrypteddisk-gw1-proxy2-1 Starting Stderr: Container roottestencrypteddisk-gw1-proxy1-1 Started Stderr: Container roottestencrypteddisk-gw1-proxy2-1 Started Stderr: Container roottestencrypteddisk-gw1-minio1-1 Starting Stderr: Container roottestencrypteddisk-gw1-resolver-1 Starting Stderr: Container roottestencrypteddisk-gw1-resolver-1 Started Stderr: Container roottestencrypteddisk-gw1-minio1-1 Started Stderr:time="2025-04-02T02:26:51Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T02:26:51Z" level=debug msg="otel error" error="" Trying to connect to Minio... get_instance_ip instance_name=minio1 http://localhost:None "GET /v1.46/containers/roottestencrypteddisk-gw1-minio1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=proxy1 http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/roottestencrypteddisk-gw1-proxy1-1/json HTTP/1.1" 200 None Starting new HTTP connection (1): 172.16.8.5:9001 Incremented Retry for (url='/'): Retry(total=2, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (2): 172.16.8.5:9001 http://localhost:None "GET /v1.46/containers/d3c30dc1d0ab04c6285af8f5a0cda41b94ef365f4965b79daa17731185931fc5/json HTTP/1.1" 200 None Incremented Retry for (url='/'): Retry(total=1, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (3): 172.16.8.5:9001 Incremented Retry for (url='/'): Retry(total=0, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (4): 172.16.8.5:9001 Can't connect to Minio: HTTPConnectionPool(host='172.16.8.5', port=9001): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) http://localhost:None "GET /v1.46/containers/898ab95780e1cdb29cd9860df0e82a31bdb65965a8d674dee5b173e38741d181/json HTTP/1.1" 200 None Executing query INSERT INTO test_hedged SELECT number, toDateTime(number) FROM numbers(100) on node_1 http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None http://localhost:None "GET 
/v1.46/containers/d3c30dc1d0ab04c6285af8f5a0cda41b94ef365f4965b79daa17731185931fc5/json HTTP/1.1" 200 None Connecting to 172.16.10.2(172.16.10.2):2181, use_ssl: False http://localhost:None "GET /v1.46/containers/898ab95780e1cdb29cd9860df0e82a31bdb65965a8d674dee5b173e38741d181/json HTTP/1.1" 200 None Connection dropped: socket connection error: Connection refused ClickHouse n1 started get_instance_ip instance_name=n2 http://localhost:None "GET /v1.46/containers/roottestinsertdistributedasyncsend-gw0-n2-1/json HTTP/1.1" 200 None get_instance_ip instance_name=n2 http://localhost:None "GET /v1.46/containers/roottestinsertdistributedasyncsend-gw0-n2-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in n2, ip: 172.16.6.4... http://localhost:None "GET /v1.46/containers/roottestinsertdistributedasyncsend-gw0-n2-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9871db20e991f05e01138dc134e543454b884d5e6aebc553bc3e7bd9f21e161c/json HTTP/1.1" 200 None run container_id:roottestjbodloadbalancing-gw8-node-1 detach:False nothrow:False cmd: ['fallocate', '-l200M', '/jbod3/.test'] Command:[docker exec roottestjbodloadbalancing-gw8-node-1 fallocate -l200M /jbod3/.test] Executing query DROP TABLE test.graphite on instance http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None [gw5] PASSED test_graphite_merge_tree/test.py::test_broken_partial_rollup http://localhost:None "GET /v1.46/containers/d3c30dc1d0ab04c6285af8f5a0cda41b94ef365f4965b79daa17731185931fc5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9871db20e991f05e01138dc134e543454b884d5e6aebc553bc3e7bd9f21e161c/json HTTP/1.1" 200 None Executing query INSERT INTO data_least_used_detect_background_changes SELECT * FROM numbers(10); INSERT INTO data_least_used_detect_background_changes SELECT * FROM numbers(10); INSERT INTO data_least_used_detect_background_changes SELECT * FROM numbers(10); INSERT INTO data_least_used_detect_background_changes SELECT * FROM numbers(10); on node Executing query CREATE TABLE test_hedged (id UInt32, date Date) ENGINE = MergeTree() ORDER BY id PARTITION BY toYYYYMM(date) on node_2 http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d3c30dc1d0ab04c6285af8f5a0cda41b94ef365f4965b79daa17731185931fc5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9871db20e991f05e01138dc134e543454b884d5e6aebc553bc3e7bd9f21e161c/json HTTP/1.1" 200 None ClickHouse n2 started get_instance_ip instance_name=n3 http://localhost:None "GET /v1.46/containers/roottestinsertdistributedasyncsend-gw0-n3-1/json HTTP/1.1" 200 None get_instance_ip instance_name=n3 http://localhost:None "GET /v1.46/containers/roottestinsertdistributedasyncsend-gw0-n3-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in n3, ip: 172.16.6.2... 
http://localhost:None "GET /v1.46/containers/roottestinsertdistributedasyncsend-gw0-n3-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/40d4b64bacc0a27a57dae2c75186e166991046c90c39af2f11ea4a74e551033e/json HTTP/1.1" 200 None ClickHouse n3 started get_instance_ip instance_name=n4 http://localhost:None "GET /v1.46/containers/roottestinsertdistributedasyncsend-gw0-n4-1/json HTTP/1.1" 200 None get_instance_ip instance_name=n4 http://localhost:None "GET /v1.46/containers/roottestinsertdistributedasyncsend-gw0-n4-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in n4, ip: 172.16.6.5... http://localhost:None "GET /v1.46/containers/roottestinsertdistributedasyncsend-gw0-n4-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/45cc14b0aa8a78c4ae4be0bd02cc304d15f507da988d1b2e0abf8745b7636105/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d3c30dc1d0ab04c6285af8f5a0cda41b94ef365f4965b79daa17731185931fc5/json HTTP/1.1" 200 None test_graphite_merge_tree/test.py::test_combined_rules Executing query DROP TABLE IF EXISTS test.graphite; CREATE TABLE test.graphite (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=8192; on instance http://localhost:None "GET /v1.46/containers/45cc14b0aa8a78c4ae4be0bd02cc304d15f507da988d1b2e0abf8745b7636105/json HTTP/1.1" 200 None Connecting to 172.16.10.2(172.16.10.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query INSERT INTO test_hedged SELECT number, toDateTime(number) FROM numbers(100) on node_2 Executing query SELECT count(), disk_name FROM system.parts WHERE table = 'data_least_used_detect_background_changes' GROUP BY disk_name ORDER BY disk_name on node http://localhost:None "GET /v1.46/containers/675c7880b04bc5ffa25f1cd209bf31ae682de51556e49f5ffb3577e9800bbb43/json HTTP/1.1" 200 None ClickHouse instance started Executing query CREATE DATABASE test on instance http://localhost:None "GET /v1.46/containers/d3c30dc1d0ab04c6285af8f5a0cda41b94ef365f4965b79daa17731185931fc5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/45cc14b0aa8a78c4ae4be0bd02cc304d15f507da988d1b2e0abf8745b7636105/json HTTP/1.1" 200 None ClickHouse n4 started Executing query DROP TABLE IF EXISTS data on n1 Connecting to 172.16.5.4(172.16.5.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/d3c30dc1d0ab04c6285af8f5a0cda41b94ef365f4965b79daa17731185931fc5/json HTTP/1.1" 200 None ClickHouse node started run container_id:roottestexternalhttpauthenticator-gw7-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /http_auth_server.py) && echo 
aW1wb3J0IGJhc2U2NAppbXBvcnQgaHR0cC5zZXJ2ZXIKaW1wb3J0IGpzb24KCkdPT0RfUEFTU1dPUkQgPSAiZ29vZF9wYXNzd29yZCIKVVNFUl9SRVNQT05TRVMgPSB7CiAgICAidGVzdF91c2VyXzEiOiB7InNldHRpbmdzIjogeyJhdXRoX3VzZXIiOiAiJ3Rlc3RfdXNlciciLCAiYXV0aF9udW0iOiAiVUludDY0XzE1In19LAogICAgInRlc3RfdXNlcl8yIjoge30sCiAgICAidGVzdF91c2VyXzMiOiAiIiwKICAgICJ0ZXN0X3VzZXJfNCI6ICJub3QganNvbiBzdHJpbmciLAp9CgoKY2xhc3MgUmVxdWVzdEhhbmRsZXIoaHR0cC5zZXJ2ZXIuQmFzZUhUVFBSZXF1ZXN0SGFuZGxlcik6CiAgICBkZWYgZGVjb2RlX2Jhc2ljKHNlbGYsIGRhdGEpOgogICAgICAgIGRlY29kZWRfZGF0YSA9IGJhc2U2NC5iNjRkZWNvZGUoZGF0YSkuZGVjb2RlKCJ1dGYtOCIpCiAgICAgICAgcmV0dXJuIGRlY29kZWRfZGF0YS5zcGxpdCgiOiIsIDEpCgogICAgZGVmIGRvX0FVVEhIRUFEKHNlbGYpOgogICAgICAgIHNlbGYuc2VuZF9yZXNwb25zZShodHRwLkhUVFBTdGF0dXMuVU5BVVRIT1JJWkVEKQogICAgICAgIHNlbGYuc2VuZF9oZWFkZXIoIldXVy1BdXRoZW50aWNhdGUiLCAnQmFzaWMgcmVhbG09IlRlc3QiJykKICAgICAgICBzZWxmLnNlbmRfaGVhZGVyKCJDb250ZW50LXR5cGUiLCAidGV4dC9odG1sIikKICAgICAgICBzZWxmLmVuZF9oZWFkZXJzKCkKCiAgICBkZWYgZG9fQUNDRVNTX0dSQU5URUQoc2VsZiwgdXNlcjogc3RyKSAtPiBOb25lOgogICAgICAgIHNlbGYuc2VuZF9yZXNwb25zZShodHRwLkhUVFBTdGF0dXMuT0spCiAgICAgICAgYm9keSA9ICIiCgogICAgICAgIHJlc3BvbnNlID0gVVNFUl9SRVNQT05TRVMuZ2V0KHVzZXIpCgogICAgICAgIGlmIGlzaW5zdGFuY2UocmVzcG9uc2UsIGRpY3QpOgogICAgICAgICAgICBib2R5ID0ganNvbi5kdW1wcyhyZXNwb25zZSkKICAgICAgICBlbHNlOgogICAgICAgICAgICBib2R5ID0gcmVzcG9uc2Ugb3IgIiIKCiAgICAgICAgYm9keV9yYXcgPSBib2R5LmVuY29kZSgidXRmLTgiKQogICAgICAgIHNlbGYuc2VuZF9oZWFkZXIoIkNvbnRlbnQtVHlwZSIsICJhcHBsaWNhdGlvbi9qc29uIikKICAgICAgICBzZWxmLnNlbmRfaGVhZGVyKCJDb250ZW50LUxlbmd0aCIsIGxlbihib2R5X3JhdykpCiAgICAgICAgc2VsZi5lbmRfaGVhZGVycygpCiAgICAgICAgc2VsZi53ZmlsZS53cml0ZShib2R5X3JhdykKCiAgICBkZWYgZG9fR0VUKHNlbGYpOgogICAgICAgIGlmIHNlbGYucGF0aCA9PSAiL2hlYWx0aCI6CiAgICAgICAgICAgIHNlbGYuc2VuZF9yZXNwb25zZShodHRwLkhUVFBTdGF0dXMuT0spCiAgICAgICAgICAgIHNlbGYuc2VuZF9oZWFkZXIoIkNvbnRlbnQtVHlwZSIsICJ0ZXh0L3BsYWluIikKICAgICAgICAgICAgc2VsZi5lbmRfaGVhZGVycygpCiAgICAgICAgICAgIHNlbGYud2ZpbGUud3JpdGUoYiJPSyIpCgogICAgICAgIGVsaWYgc2VsZi5wYXRoID09ICIvYmFzaWNfYXV0aCI6CiAgICAgICAgICAgIGF1dGhfaGVhZGVyID0gc2VsZi5oZWFkZXJzLmdldCgiQXV0aG9yaXphdGlvbiIpCgogICAgICAgICAgICBpZiBhdXRoX2hlYWRlciBpcyBOb25lOgogICAgICAgICAgICAgICAgc2VsZi5kb19BVVRISEVBRCgpCiAgICAgICAgICAgICAgICByZXR1cm4KCiAgICAgICAgICAgIGF1dGhfc2NoZW1lLCBkYXRhID0gYXV0aF9oZWFkZXIuc3BsaXQoIiAiLCAxKQoKICAgICAgICAgICAgaWYgYXV0aF9zY2hlbWUgIT0gIkJhc2ljIjoKICAgICAgICAgICAgICAgIHByaW50KGF1dGhfc2NoZW1lKQogICAgICAgICAgICAgICAgc2VsZi5kb19BVVRISEVBRCgpCiAgICAgICAgICAgICAgICByZXR1cm4KCiAgICAgICAgICAgIHVzZXJfbmFtZSwgcGFzc3dvcmQgPSBzZWxmLmRlY29kZV9iYXNpYyhkYXRhKQogICAgICAgICAgICBpZiBwYXNzd29yZCA9PSBHT09EX1BBU1NXT1JEOgogICAgICAgICAgICAgICAgc2VsZi5kb19BQ0NFU1NfR1JBTlRFRCh1c2VyX25hbWUpCiAgICAgICAgICAgIGVsc2U6CiAgICAgICAgICAgICAgICBzZWxmLmRvX0FVVEhIRUFEKCkKCgppZiBfX25hbWVfXyA9PSAiX19tYWluX18iOgogICAgaHR0cGQgPSBodHRwLnNlcnZlci5IVFRQU2VydmVyKAogICAgICAgICgKICAgICAgICAgICAgIjAuMC4wLjAiLAogICAgICAgICAgICA4MDAwLAogICAgICAgICksCiAgICAgICAgUmVxdWVzdEhhbmRsZXIsCiAgICApCiAgICB0cnk6CiAgICAgICAgaHR0cGQuc2VydmVfZm9yZXZlcigpCiAgICBmaW5hbGx5OgogICAgICAgIGh0dHBkLnNlcnZlcl9jbG9zZSgpCg== | base64 --decode > /http_auth_server.py'] Command:[docker exec roottestexternalhttpauthenticator-gw7-node-1 bash -c mkdir -p $(dirname /http_auth_server.py) && echo 
aW1wb3J0IGJhc2U2NAppbXBvcnQgaHR0cC5zZXJ2ZXIKaW1wb3J0IGpzb24KCkdPT0RfUEFTU1dPUkQgPSAiZ29vZF9wYXNzd29yZCIKVVNFUl9SRVNQT05TRVMgPSB7CiAgICAidGVzdF91c2VyXzEiOiB7InNldHRpbmdzIjogeyJhdXRoX3VzZXIiOiAiJ3Rlc3RfdXNlciciLCAiYXV0aF9udW0iOiAiVUludDY0XzE1In19LAogICAgInRlc3RfdXNlcl8yIjoge30sCiAgICAidGVzdF91c2VyXzMiOiAiIiwKICAgICJ0ZXN0X3VzZXJfNCI6ICJub3QganNvbiBzdHJpbmciLAp9CgoKY2xhc3MgUmVxdWVzdEhhbmRsZXIoaHR0cC5zZXJ2ZXIuQmFzZUhUVFBSZXF1ZXN0SGFuZGxlcik6CiAgICBkZWYgZGVjb2RlX2Jhc2ljKHNlbGYsIGRhdGEpOgogICAgICAgIGRlY29kZWRfZGF0YSA9IGJhc2U2NC5iNjRkZWNvZGUoZGF0YSkuZGVjb2RlKCJ1dGYtOCIpCiAgICAgICAgcmV0dXJuIGRlY29kZWRfZGF0YS5zcGxpdCgiOiIsIDEpCgogICAgZGVmIGRvX0FVVEhIRUFEKHNlbGYpOgogICAgICAgIHNlbGYuc2VuZF9yZXNwb25zZShodHRwLkhUVFBTdGF0dXMuVU5BVVRIT1JJWkVEKQogICAgICAgIHNlbGYuc2VuZF9oZWFkZXIoIldXVy1BdXRoZW50aWNhdGUiLCAnQmFzaWMgcmVhbG09IlRlc3QiJykKICAgICAgICBzZWxmLnNlbmRfaGVhZGVyKCJDb250ZW50LXR5cGUiLCAidGV4dC9odG1sIikKICAgICAgICBzZWxmLmVuZF9oZWFkZXJzKCkKCiAgICBkZWYgZG9fQUNDRVNTX0dSQU5URUQoc2VsZiwgdXNlcjogc3RyKSAtPiBOb25lOgogICAgICAgIHNlbGYuc2VuZF9yZXNwb25zZShodHRwLkhUVFBTdGF0dXMuT0spCiAgICAgICAgYm9keSA9ICIiCgogICAgICAgIHJlc3BvbnNlID0gVVNFUl9SRVNQT05TRVMuZ2V0KHVzZXIpCgogICAgICAgIGlmIGlzaW5zdGFuY2UocmVzcG9uc2UsIGRpY3QpOgogICAgICAgICAgICBib2R5ID0ganNvbi5kdW1wcyhyZXNwb25zZSkKICAgICAgICBlbHNlOgogICAgICAgICAgICBib2R5ID0gcmVzcG9uc2Ugb3IgIiIKCiAgICAgICAgYm9keV9yYXcgPSBib2R5LmVuY29kZSgidXRmLTgiKQogICAgICAgIHNlbGYuc2VuZF9oZWFkZXIoIkNvbnRlbnQtVHlwZSIsICJhcHBsaWNhdGlvbi9qc29uIikKICAgICAgICBzZWxmLnNlbmRfaGVhZGVyKCJDb250ZW50LUxlbmd0aCIsIGxlbihib2R5X3JhdykpCiAgICAgICAgc2VsZi5lbmRfaGVhZGVycygpCiAgICAgICAgc2VsZi53ZmlsZS53cml0ZShib2R5X3JhdykKCiAgICBkZWYgZG9fR0VUKHNlbGYpOgogICAgICAgIGlmIHNlbGYucGF0aCA9PSAiL2hlYWx0aCI6CiAgICAgICAgICAgIHNlbGYuc2VuZF9yZXNwb25zZShodHRwLkhUVFBTdGF0dXMuT0spCiAgICAgICAgICAgIHNlbGYuc2VuZF9oZWFkZXIoIkNvbnRlbnQtVHlwZSIsICJ0ZXh0L3BsYWluIikKICAgICAgICAgICAgc2VsZi5lbmRfaGVhZGVycygpCiAgICAgICAgICAgIHNlbGYud2ZpbGUud3JpdGUoYiJPSyIpCgogICAgICAgIGVsaWYgc2VsZi5wYXRoID09ICIvYmFzaWNfYXV0aCI6CiAgICAgICAgICAgIGF1dGhfaGVhZGVyID0gc2VsZi5oZWFkZXJzLmdldCgiQXV0aG9yaXphdGlvbiIpCgogICAgICAgICAgICBpZiBhdXRoX2hlYWRlciBpcyBOb25lOgogICAgICAgICAgICAgICAgc2VsZi5kb19BVVRISEVBRCgpCiAgICAgICAgICAgICAgICByZXR1cm4KCiAgICAgICAgICAgIGF1dGhfc2NoZW1lLCBkYXRhID0gYXV0aF9oZWFkZXIuc3BsaXQoIiAiLCAxKQoKICAgICAgICAgICAgaWYgYXV0aF9zY2hlbWUgIT0gIkJhc2ljIjoKICAgICAgICAgICAgICAgIHByaW50KGF1dGhfc2NoZW1lKQogICAgICAgICAgICAgICAgc2VsZi5kb19BVVRISEVBRCgpCiAgICAgICAgICAgICAgICByZXR1cm4KCiAgICAgICAgICAgIHVzZXJfbmFtZSwgcGFzc3dvcmQgPSBzZWxmLmRlY29kZV9iYXNpYyhkYXRhKQogICAgICAgICAgICBpZiBwYXNzd29yZCA9PSBHT09EX1BBU1NXT1JEOgogICAgICAgICAgICAgICAgc2VsZi5kb19BQ0NFU1NfR1JBTlRFRCh1c2VyX25hbWUpCiAgICAgICAgICAgIGVsc2U6CiAgICAgICAgICAgICAgICBzZWxmLmRvX0FVVEhIRUFEKCkKCgppZiBfX25hbWVfXyA9PSAiX19tYWluX18iOgogICAgaHR0cGQgPSBodHRwLnNlcnZlci5IVFRQU2VydmVyKAogICAgICAgICgKICAgICAgICAgICAgIjAuMC4wLjAiLAogICAgICAgICAgICA4MDAwLAogICAgICAgICksCiAgICAgICAgUmVxdWVzdEhhbmRsZXIsCiAgICApCiAgICB0cnk6CiAgICAgICAgaHR0cGQuc2VydmVfZm9yZXZlcigpCiAgICBmaW5hbGx5OgogICAgICAgIGh0dHBkLnNlcnZlcl9jbG9zZSgpCg== | base64 --decode > /http_auth_server.py] Executing query INSERT INTO test.graphite VALUES ('five_min.count', 1, 1487970000, toDate(1487970000), 1), ('five_min.max', 0, 1487970000, toDate(1487970000), 1), ('five_min.count', 1, 1487970300, toDate(1487970300), 1), ('five_min.max', 1, 1487970300, toDate(1487970300), 1), ('five_min.count', 1, 1487970600, toDate(1487970600), 1), ('five_min.max', 2, 1487970600, toDate(1487970600), 1), ('five_min.count', 1, 1487970900, toDate(1487970900), 1), ('five_min.max', 
3, 1487970900, toDate(1487970900), 1), ('five_min.count', 1, 1487971200, toDate(1487971200), 1), ('five_min.max', 4, 1487971200, toDate(1487971200), 1), ('five_min.count', 1, 1487971500, toDate(1487971500), 1), ('five_min.max', 5, 1487971500, toDate(1487971500), 1), ('five_min.count', 1, 1487971800, toDate(1487971800), 1), ('five_min.max', 6, 1487971800, toDate(1487971800), 1), ('five_min.count', 1, 1487972100, toDate(1487972100), 1), ('five_min.max', 7, 1487972100, toDate(1487972100), 1), ('five_min.count', 1, 1487972400, toDate(1487972400), 1 on instance run container_id:roottestexternalhttpauthenticator-gw7-node-1 detach:True nothrow:False cmd: ['bash', '-c', 'python3 /http_auth_server.py > /http_auth_server.log 2>&1'] Command:[docker exec -u root roottestexternalhttpauthenticator-gw7-node-1 bash -c python3 /http_auth_server.py > /http_auth_server.log 2>&1] run container_id:roottestexternalhttpauthenticator-gw7-node-1 detach:False nothrow:True cmd: ['curl', '-s', 'http://localhost:8000/health'] Command:[docker exec roottestexternalhttpauthenticator-gw7-node-1 curl -s http://localhost:8000/health] Executing query CREATE TABLE test_hedged (id UInt32, date Date) ENGINE = MergeTree() ORDER BY id PARTITION BY toYYYYMM(date) on node_3 run container_id:roottestjbodloadbalancing-gw8-node-1 detach:False nothrow:False cmd: ['rm', '/jbod3/.test'] Command:[docker exec roottestjbodloadbalancing-gw8-node-1 rm /jbod3/.test] Executing query DROP TABLE IF EXISTS test.simple on instance Executing query DROP TABLE IF EXISTS dist on n1 Exitcode:7 Reply1: run container_id:roottestexternalhttpauthenticator-gw7-node-1 detach:False nothrow:True cmd: ['curl', '-s', 'http://localhost:8000/health'] Command:[docker exec roottestexternalhttpauthenticator-gw7-node-1 curl -s http://localhost:8000/health] Executing query INSERT INTO data_least_used_detect_background_changes SELECT * FROM numbers(10); INSERT INTO data_least_used_detect_background_changes SELECT * FROM numbers(10); INSERT INTO data_least_used_detect_background_changes SELECT * FROM numbers(10); INSERT INTO data_least_used_detect_background_changes SELECT * FROM numbers(10); on node Exitcode:7 Reply1: run container_id:roottestexternalhttpauthenticator-gw7-node-1 detach:False nothrow:True cmd: ['curl', '-s', 'http://localhost:8000/health'] Command:[docker exec roottestexternalhttpauthenticator-gw7-node-1 curl -s http://localhost:8000/health] Executing query SELECT metric, value, timestamp FROM test.graphite ORDER BY (timestamp, metric) on instance Stdout:OK Reply1: OK Executing query SELECT currentUser() on node Executing query CREATE TABLE test.simple (key UInt64, value String) ENGINE = MergeTree ORDER BY tuple(); on instance Executing query INSERT INTO test_hedged SELECT number, toDateTime(number) FROM numbers(100) on node_3 Executing query DROP TABLE IF EXISTS data on n2 Starting new HTTP connection (5): 172.16.8.5:9001 http://172.16.8.5:9001 "GET / HTTP/1.1" 200 0 Connected to Minio. 
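The base64 payload piped through `base64 --decode` above is the test's stub authentication server, /http_auth_server.py, used by test_external_http_authenticator. Decoded here for readability, it is a plain http.server handler that answers /health probes and checks HTTP Basic credentials against a fixed password:

import base64
import http.server
import json

GOOD_PASSWORD = "good_password"
USER_RESPONSES = {
    "test_user_1": {"settings": {"auth_user": "'test_user'", "auth_num": "UInt64_15"}},
    "test_user_2": {},
    "test_user_3": "",
    "test_user_4": "not json string",
}


class RequestHandler(http.server.BaseHTTPRequestHandler):
    def decode_basic(self, data):
        decoded_data = base64.b64decode(data).decode("utf-8")
        return decoded_data.split(":", 1)

    def do_AUTHHEAD(self):
        self.send_response(http.HTTPStatus.UNAUTHORIZED)
        self.send_header("WWW-Authenticate", 'Basic realm="Test"')
        self.send_header("Content-type", "text/html")
        self.end_headers()

    def do_ACCESS_GRANTED(self, user: str) -> None:
        self.send_response(http.HTTPStatus.OK)
        body = ""

        response = USER_RESPONSES.get(user)

        if isinstance(response, dict):
            body = json.dumps(response)
        else:
            body = response or ""

        body_raw = body.encode("utf-8")
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", len(body_raw))
        self.end_headers()
        self.wfile.write(body_raw)

    def do_GET(self):
        if self.path == "/health":
            self.send_response(http.HTTPStatus.OK)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"OK")

        elif self.path == "/basic_auth":
            auth_header = self.headers.get("Authorization")

            if auth_header is None:
                self.do_AUTHHEAD()
                return

            auth_scheme, data = auth_header.split(" ", 1)

            if auth_scheme != "Basic":
                print(auth_scheme)
                self.do_AUTHHEAD()
                return

            user_name, password = self.decode_basic(data)
            if password == GOOD_PASSWORD:
                self.do_ACCESS_GRANTED(user_name)
            else:
                self.do_AUTHHEAD()


if __name__ == "__main__":
    httpd = http.server.HTTPServer(
        (
            "0.0.0.0",
            8000,
        ),
        RequestHandler,
    )
    try:
        httpd.serve_forever()
    finally:
        httpd.server_close()

This explains the curl probes just above: the first two `curl -s http://localhost:8000/health` calls exit with code 7 (connection refused) while the server is still starting, and the next one returns "OK".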
http://172.16.8.5:9001 "GET /root?location= HTTP/1.1" 404 0 http://172.16.8.5:9001 "PUT /root HTTP/1.1" 200 0 S3 bucket 'root' created http://172.16.8.5:9001 "GET /root2?location= HTTP/1.1" 404 0 http://172.16.8.5:9001 "PUT /root2 HTTP/1.1" 200 0 S3 bucket 'root2' created ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/.env --project-name roottestencrypteddisk-gw1 --file /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/.env --project-name roottestencrypteddisk-gw1 --file /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml up -d --no-recreate] Executing query SELECT count(), disk_name FROM system.parts WHERE table = 'data_least_used_detect_background_changes' GROUP BY disk_name ORDER BY disk_name on node Executing query OPTIMIZE TABLE test.graphite PARTITION 201702 FINAL on instance Executing query INSERT INTO test.simple VALUES (1, 'abc'), (2, 'def') on instance Executing query CREATE TABLE test_hedged (id UInt32, date Date) ENGINE = MergeTree() ORDER BY id PARTITION BY toYYYYMM(date) on node_4 [gw7] PASSED test_external_http_authenticator/test.py::test_basic_auth_failed test_external_http_authenticator/test.py::test_session_settings_from_auth_response Executing query SELECT currentUser() on node Executing query DROP TABLE IF EXISTS dist on n2 Connecting to 172.16.10.2(172.16.10.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query SELECT * FROM test.graphite ORDER BY (metric, timestamp) on instance run container_id:roottestjbodloadbalancing-gw8-node-1 detach:False nothrow:False cmd: ['rm', '-f', '/jbod3/.test'] Command:[docker exec roottestjbodloadbalancing-gw8-node-1 rm -f /jbod3/.test] Executing query SELECT * FROM test.simple FORMAT Protobuf SETTINGS format_schema='message_tmp:MessageTmp' on instance via HTTP interface Starting new HTTP connection (1): 172.16.7.2:8123 http://172.16.7.2:8123 "GET /?query=SELECT+%2A+FROM+test.simple+FORMAT+Protobuf+SETTINGS+format_schema%3D%27message_tmp%3AMessageTmp%27 HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS test.new_simple on instance Executing query INSERT INTO test_hedged SELECT number, toDateTime(number) FROM numbers(100) on node_4 Executing query DROP TABLE IF EXISTS data_least_used_detect_background_changes SYNC on node Executing query SYSTEM FLUSH LOGS on node [gw9] PASSED test_grpc_protocol_ssl/test.py::test_insecure_channel test_grpc_protocol_ssl/test.py::test_secure_channel Stderr: Container roottestencrypteddisk-gw1-proxy2-1 Running Stderr: Container roottestencrypteddisk-gw1-proxy1-1 Running Stderr: Container roottestencrypteddisk-gw1-resolver-1 Running Stderr: Container roottestencrypteddisk-gw1-minio1-1 Running Stderr: Container roottestencrypteddisk-gw1-node-1 Creating Stderr: Container roottestencrypteddisk-gw1-node-1 Created Stderr: Container roottestencrypteddisk-gw1-node-1 Starting Stderr: Container roottestencrypteddisk-gw1-node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET 
/v1.46/containers/roottestencrypteddisk-gw1-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestencrypteddisk-gw1-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.8.6... http://localhost:None "GET /v1.46/containers/roottestencrypteddisk-gw1-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS data on n3 http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None Executing query CREATE TABLE test.new_simple (key2 UInt64, value2 String) ENGINE = MergeTree ORDER BY tuple(); on instance Executing query DROP TABLE test.graphite on instance [gw5] PASSED test_graphite_merge_tree/test.py::test_combined_rules [gw8] PASSED test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used_detect_background_changes Executing query CREATE TABLE data_least_used_next_disk ( s String CODEC(NONE) ) ENGINE = MergeTree ORDER BY tuple() SETTINGS storage_policy = 'jbod_least_used'; SYSTEM STOP MERGES data_least_used_next_disk; -- 100MiB each part, 3 parts in total INSERT INTO data_least_used_next_disk SELECT repeat('a', 100) FROM numbers(3e6) SETTINGS max_block_size='1Mi'; on node test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used_next_disk http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None Executing query CREATE TABLE test_hedged (id UInt32, date Date) ENGINE = MergeTree() ORDER BY id PARTITION BY toYYYYMM(date) on node [gw9] PASSED test_grpc_protocol_ssl/test.py::test_secure_channel test_grpc_protocol_ssl/test.py::test_wrong_client_certificate Executing query DROP TABLE IF EXISTS dist on n3 http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None Executing query INSERT INTO test.new_simple VALUES (1, 'abc'), (2, 'def') on instance test_graphite_merge_tree/test.py::test_combined_rules_with_default Executing query DROP TABLE IF EXISTS test.graphite; CREATE TABLE test.graphite (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=8192; on instance http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None Executing query INSERT INTO test_hedged SELECT number, toDateTime(number) FROM numbers(100) on node Executing query DROP TABLE IF EXISTS data on n4 http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None Executing query SYSTEM DROP FORMAT SCHEMA CACHE FOR Protobuf on instance Executing query DROP TABLE IF EXISTS test.graphite; CREATE TABLE test.graphite (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup_with_default') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=1; on instance http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None Executing query CREATE TABLE distributed (id UInt32, date Date) ENGINE = Distributed('test_cluster', 
'default', 'test_hedged') on node Executing query DROP TABLE IF EXISTS dist on n4 http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None Executing query select Settings from system.query_log where type = 'QueryFinish' and query_id = 'test_query_test_user_1' FORMAT JSON on node Executing query SELECT * FROM test.new_simple FORMAT Protobuf SETTINGS format_schema='message_tmp:MessageTmp' on instance via HTTP interface Starting new HTTP connection (1): 172.16.7.2:8123 http://172.16.7.2:8123 "GET /?query=SELECT+%2A+FROM+test.new_simple+FORMAT+Protobuf+SETTINGS+format_schema%3D%27message_tmp%3AMessageTmp%27 HTTP/1.1" 200 None Executing query SELECT * FROM test.simple FORMAT Protobuf SETTINGS format_schema='message_tmp:MessageTmp' on instance via HTTP interface Starting new HTTP connection (1): 172.16.7.2:8123 Executing query INSERT INTO test.graphite VALUES ('top_level.count', 1, 1487970000, toDate(1487970000), 1), ('top_level.max', 0, 1487970000, toDate(1487970000), 1), ('top_level.count', 1, 1487970060, toDate(1487970060), 1), ('top_level.max', 1, 1487970060, toDate(1487970060), 1), ('top_level.count', 1, 1487970120, toDate(1487970120), 1), ('top_level.max', 2, 1487970120, toDate(1487970120), 1), ('top_level.count', 1, 1487970180, toDate(1487970180), 1), ('top_level.max', 3, 1487970180, toDate(1487970180), 1), ('top_level.count', 1, 1487970240, toDate(1487970240), 1), ('top_level.max', 4, 1487970240, toDate(1487970240), 1), ('top_level.count', 1, 1487970300, toDate(1487970300), 1), ('top_level.max', 5, 1487970300, toDate(1487970300), 1), ('top_level.count', 1, 1487970360, toDate(1487970360), 1), ('top_level.max', 6, 1487970360, toDate(1487970360), 1), ('top_level.count', 1, 1487970420, toDate(1487970420), 1), ('top_level.max', 7, 1487970420, toDate(1487970420), 1), ('top_level.count', 1, 1487970480, toDa on instance Connecting to 172.16.5.4(172.16.5.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Executing query SELECT value FROM system.build_options WHERE name = 'CXX_FLAGS' on node Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n1 http://172.16.7.2:8123 "GET /?query=SELECT+%2A+FROM+test.simple+FORMAT+Protobuf+SETTINGS+format_schema%3D%27message_tmp%3AMessageTmp%27 HTTP/1.1" 500 None [gw6] PASSED test_format_schema_on_server/test.py::test_drop_cache_protobuf_format test_format_schema_on_server/test.py::test_drop_capn_proto_format Executing query DROP TABLE IF EXISTS test.simple on instance http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None Failed connecting to Zookeeper within the connection retry policy. 
Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottestfilecluster-gw2-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.5.2, port:2181, use_ssl:False Connecting to 172.16.5.2(172.16.5.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Connecting to 172.16.10.2(172.16.10.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Executing query SELECT metric, value, timestamp FROM test.graphite ORDER BY (timestamp, metric) on instance Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottestfilecluster-gw2-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.5.3, port:2181, use_ssl:False Connecting to 172.16.5.3(172.16.5.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None Executing query CREATE TABLE test.simple (key UInt64, value String) ENGINE = MergeTree ORDER BY tuple(); on instance Failed connecting to Zookeeper within the connection retry policy. 
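The Connect/GetChildren/Close exchanges above are the runner's ZooKeeper readiness probe: it repeatedly opens a kazoo session, lists the root znode (which returns ['keeper'] once Keeper is up), and closes the session. A minimal sketch under those assumptions (the function name, timeout, and retry count are illustrative):

import time

from kazoo.client import KazooClient


def probe_zookeeper(host: str, port: int = 2181, attempts: int = 10) -> list:
    """Retry Connect -> GetChildren('/') -> Close until ZooKeeper answers."""
    for _ in range(attempts):
        zk = KazooClient(hosts=f"{host}:{port}")
        try:
            zk.start(timeout=10)         # Connect(...)
            return zk.get_children("/")  # GetChildren(path='/'), e.g. ['keeper']
        except Exception:
            time.sleep(1)                # connection refused while Keeper boots
        finally:
            zk.stop()                    # Close(); session state -> CLOSED
            zk.close()
    raise TimeoutError(f"ZooKeeper at {host}:{port} did not become ready")


print(probe_zookeeper("172.16.10.2"))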
Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.10.3, port:2181, use_ssl:False Connecting to 172.16.10.3(172.16.10.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() run container_id:roottesthedgedrequestsparallel-gw3-node_1-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '<clickhouse>\n <profiles>\n <default>\n <sleep_in_send_tables_status_ms>1000</sleep_in_send_tables_status_ms>\n <sleep_in_send_data_ms>0</sleep_in_send_data_ms>\n </default>\n </profiles>\n</clickhouse>' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_1-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>1000</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>0</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n1 Executing query SELECT currentUser() on node Executing query SELECT count(), disk_name FROM system.parts WHERE table = 'data_least_used_next_disk' GROUP BY disk_name ORDER BY disk_name on node run container_id:roottesthedgedrequestsparallel-gw3-node_2-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '<clickhouse>\n <profiles>\n <default>\n <sleep_in_send_tables_status_ms>1000</sleep_in_send_tables_status_ms>\n <sleep_in_send_data_ms>0</sleep_in_send_data_ms>\n </default>\n </profiles>\n</clickhouse>' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_2-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>1000</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>0</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/.env --project-name roottestfilecluster-gw2 --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_1_0/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/.env --project-name roottestfilecluster-gw2 --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_1_0/docker-compose.yml up -d --no-recreate] http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None run container_id:roottesthedgedrequestsparallel-gw3-node_3-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '<clickhouse>\n <profiles>\n <default>\n <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms>\n <sleep_in_send_data_ms>30000</sleep_in_send_data_ms>\n </default>\n </profiles>\n</clickhouse>' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_3-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>30000</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.10.4, port:2181, use_ssl:False Connecting to 172.16.10.4(172.16.10.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost run container_id:roottesthedgedrequestsparallel-gw3-node_4-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '<clickhouse>\n <profiles>\n <default>\n <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms>\n <sleep_in_send_data_ms>0</sleep_in_send_data_ms>\n </default>\n </profiles>\n</clickhouse>' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_4-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>0</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None Executing query INSERT INTO test.simple VALUES (1, 'abc'), (2, 'def') on instance Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 Executing query OPTIMIZE TABLE test.graphite PARTITION 201702 FINAL on instance Failed connecting to Zookeeper within the connection retry policy.
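The four users1.xml files written above differ only in two profile settings that the hedged-requests test uses to slow replicas down selectively: node_1 and node_2 get sleep_in_send_tables_status_ms=1000, node_3 gets sleep_in_send_data_ms=30000, and node_4 gets no artificial delay, which is exactly what the later system.settings checks verify per node. A hypothetical helper that renders those files (the template mirrors the configs above; the function name is illustrative):

# Hypothetical renderer for the per-node users1.xml written above.
TEMPLATE = """<clickhouse>
    <profiles>
        <default>
            <sleep_in_send_tables_status_ms>{tables_status_ms}</sleep_in_send_tables_status_ms>
            <sleep_in_send_data_ms>{data_ms}</sleep_in_send_data_ms>
        </default>
    </profiles>
</clickhouse>
"""


def render_users_xml(tables_status_ms: int, data_ms: int) -> str:
    return TEMPLATE.format(tables_status_ms=tables_status_ms, data_ms=data_ms)


# Values per the log: (sleep_in_send_tables_status_ms, sleep_in_send_data_ms)
for node, (status_ms, data_ms) in {
    "node_1": (1000, 0),
    "node_2": (1000, 0),
    "node_3": (0, 30000),
    "node_4": (0, 0),
}.items():
    print(node, render_users_xml(status_ms, data_ms))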
Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/.env --project-name roottestkeeperclient-gw4 --file /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/.env --project-name roottestkeeperclient-gw4 --file /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate] Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n1 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SYSTEM FLUSH LOGS on node Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS data_least_used_next_disk SYNC on node Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT * FROM test.simple FORMAT CapnProto SETTINGS format_schema='message_tmp:MessageTmp' on instance via HTTP interface Starting new HTTP connection (1): 172.16.7.2:8123 http://172.16.7.2:8123 "GET /?query=SELECT+%2A+FROM+test.simple+FORMAT+CapnProto+SETTINGS+format_schema%3D%27message_tmp%3AMessageTmp%27 HTTP/1.1" 200 None Executing query SELECT * FROM test.simple Format CapnProto SETTINGS format_schema='/ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/instance/database/format_schemas/message_tmp:MessageTmp' on instance Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n2 http://localhost:None "GET /v1.46/containers/9102fe9ff9de7fbbf0c6824e1eaf5d1fa7c942ef4716f771202e7d33b741363e/json HTTP/1.1" 200 None ClickHouse node started Executing query SELECT policy_name FROM system.storage_policies on node Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET 
/?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT * FROM test.graphite ORDER BY (metric, timestamp) on instance Stderr: Container roottestfilecluster-gw2-zoo1-1 Running Stderr: Container roottestfilecluster-gw2-zoo2-1 Running Stderr: Container roottestfilecluster-gw2-zoo3-1 Running Stderr: Container roottestfilecluster-gw2-s0_0_0-1 Creating Stderr: Container roottestfilecluster-gw2-s0_0_1-1 Creating Stderr: Container roottestfilecluster-gw2-s0_1_0-1 Creating Stderr: Container roottestfilecluster-gw2-s0_0_0-1 Created Stderr: Container roottestfilecluster-gw2-s0_0_1-1 Created Stderr: Container roottestfilecluster-gw2-s0_1_0-1 Created Stderr: Container roottestfilecluster-gw2-s0_0_0-1 Starting Stderr: Container roottestfilecluster-gw2-s0_0_1-1 Starting Stderr: Container roottestfilecluster-gw2-s0_1_0-1 Starting Stderr: Container roottestfilecluster-gw2-s0_1_0-1 Started Stderr: Container roottestfilecluster-gw2-s0_0_1-1 Started Stderr: Container roottestfilecluster-gw2-s0_0_0-1 Started ClickHouse instance created get_instance_ip instance_name=s0_0_0 http://localhost:None "GET /v1.46/containers/roottestfilecluster-gw2-s0_0_0-1/json HTTP/1.1" 200 None get_instance_ip instance_name=s0_0_0 http://localhost:None "GET /v1.46/containers/roottestfilecluster-gw2-s0_0_0-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in s0_0_0, ip: 172.16.5.6... http://localhost:None "GET /v1.46/containers/roottestfilecluster-gw2-s0_0_0-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n2 Executing query DROP TABLE IF EXISTS test.new_simple on instance http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None Stderr: Container roottestkeeperclient-gw4-zoo1-1 Running Stderr: Container roottestkeeperclient-gw4-zoo2-1 Running Stderr: Container roottestkeeperclient-gw4-zoo3-1 Running Stderr: Container roottestkeeperclient-gw4-node-1 Creating Stderr: Container roottestkeeperclient-gw4-node-1 Created Stderr: Container roottestkeeperclient-gw4-node-1 Starting Stderr: Container roottestkeeperclient-gw4-node-1 Started ClickHouse instance created get_instance_ip instance_name=node [gw8] PASSED test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used_next_disk Executing query CREATE TABLE data_round_robin (p UInt8) 
ENGINE = MergeTree ORDER BY tuple() SETTINGS storage_policy = 'jbod_round_robin'; SYSTEM STOP MERGES data_round_robin; INSERT INTO data_round_robin SELECT * FROM numbers(10); INSERT INTO data_round_robin SELECT * FROM numbers(10); INSERT INTO data_round_robin SELECT * FROM numbers(10); INSERT INTO data_round_robin SELECT * FROM numbers(10); on node test_jbod_load_balancing/test.py::test_jbod_load_balancing_round_robin http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.10.5... http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF\n<clickhouse>\n<storage_configuration>\n <disks>\n <encrypted_policy_multikeys_disk>\n <type>encrypted</type>\n <disk>disk_local</disk>\n <path>encrypted_policy_multikeys_dir/</path>\n <key>firstfirstfirstf</key>\n </encrypted_policy_multikeys_disk>\n </disks>\n <policies>\n <encrypted_policy_multikeys>\n <volumes>\n <main>\n <disk>encrypted_policy_multikeys_disk</disk>\n </main>\n </volumes>\n </encrypted_policy_multikeys>\n </policies>\n</storage_configuration>\n</clickhouse>\nEOF'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF <clickhouse> <storage_configuration> <disks> <encrypted_policy_multikeys_disk> <type>encrypted</type> <disk>disk_local</disk> <path>encrypted_policy_multikeys_dir/</path> <key>firstfirstfirstf</key> </encrypted_policy_multikeys_disk> </disks> <policies> <encrypted_policy_multikeys> <volumes> <main> <disk>encrypted_policy_multikeys_disk</disk> </main> </volumes> </encrypted_policy_multikeys> </policies> </storage_configuration> </clickhouse> EOF] http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None Executing query SYSTEM RELOAD CONFIG on node Executing query DROP TABLE test.graphite on instance [gw5] PASSED test_graphite_merge_tree/test.py::test_combined_rules_with_default Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n2 Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 Executing query CREATE TABLE test.new_simple (key2 UInt64, value2 String) ENGINE = MergeTree ORDER BY tuple(); on instance Executing query select Settings from system.query_log where type = 'QueryFinish' and query_id = 'test_query_test_user_2' FORMAT JSON on node http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None Executing query SELECT count(), disk_name FROM system.parts WHERE table = 'data_round_robin' GROUP BY disk_name ORDER BY disk_name on node Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 test_graphite_merge_tree/test.py::test_multiple_output_blocks Executing query DROP TABLE IF EXISTS test.graphite; CREATE TABLE test.graphite (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=8192; on instance http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n3 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Command:[docker compose --env-file /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/.env --project-name roottestgrpcprotocolssl-gw9 --file /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/node/docker-compose.yml stop --timeout 20] [gw9] PASSED test_grpc_protocol_ssl/test.py::test_wrong_client_certificate http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None Executing query INSERT INTO test.new_simple VALUES (1, 'abc'), (2, 'def') on instance
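The 8123 requests above show how these tests drive ClickHouse over the HTTP interface: the whole statement, including the FORMAT and SETTINGS clauses such as format_schema='message_tmp:MessageTmp', travels URL-encoded in the query parameter. A minimal sketch with requests (host, port, and query text taken from the log):

import requests

resp = requests.get(
    "http://172.16.7.2:8123/",
    params={
        "query": "SELECT * FROM test.simple FORMAT Protobuf "
        "SETTINGS format_schema='message_tmp:MessageTmp'"
    },
)
print(resp.status_code)  # 200 in the log; it also shows 500 for the same
print(resp.content)      # query shape after SYSTEM DROP FORMAT SCHEMA CACHE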
http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None Executing query SELECT policy_name FROM system.storage_policies WHERE policy_name='encrypted_policy_multikeys' on node Executing query SELECT currentUser() on node Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_2 via HTTP interface Starting new HTTP connection (1): 172.16.4.5:8123 http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_2 via HTTP interface Starting new HTTP connection (1): 172.16.4.5:8123 http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS data_round_robin SYNC on node Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n3 http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None Executing query INSERT INTO test.graphite FORMAT TSV on instance Executing query SELECT * FROM test.new_simple FORMAT CapnProto SETTINGS format_schema='message_tmp:MessageTmp' on instance via HTTP interface Starting new HTTP connection (1): 172.16.7.2:8123 http://172.16.7.2:8123 "GET /?query=SELECT+%2A+FROM+test.new_simple+FORMAT+CapnProto+SETTINGS+format_schema%3D%27message_tmp%3AMessageTmp%27 HTTP/1.1" 200 None Executing query SELECT * FROM test.new_simple Format CapnProto SETTINGS format_schema='/ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/instance/database/format_schemas/message_tmp:MessageTmp' on instance Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_2 via HTTP interface Starting new HTTP connection (1): 172.16.4.5:8123 http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_2 via HTTP interface Starting new HTTP connection (1): 172.16.4.5:8123 http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None Executing query CREATE TABLE 
encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='encrypted_policy_multikeys' on node http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_3 via HTTP interface Starting new HTTP connection (1): 172.16.4.4:8123 http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None Executing query SYSTEM FLUSH LOGS on node http://172.16.4.4:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_3 via HTTP interface Starting new HTTP connection (1): 172.16.4.4:8123 Command:[docker compose --env-file /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/.env --project-name roottestjbodloadbalancing-gw8 --file /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/node/docker-compose.yml stop --timeout 20] [gw8] PASSED test_jbod_load_balancing/test.py::test_jbod_load_balancing_round_robin http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n3 http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None http://172.16.4.4:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_4 via HTTP interface Starting new HTTP connection (1): 172.16.4.2:8123 http://172.16.4.2:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_4 via HTTP interface Starting new HTTP connection (1): 172.16.4.2:8123 Executing query SELECT * FROM test.simple FORMAT CapnProto SETTINGS format_schema='message_tmp:MessageTmp' on instance via HTTP interface Starting new HTTP connection (1): 172.16.7.2:8123 http://172.16.4.2:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottesthedgedrequestsparallel-gw3-node-1 bash -c ps -C clickhouse] http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None http://172.16.7.2:8123 "GET /?query=SELECT+%2A+FROM+test.simple+FORMAT+CapnProto+SETTINGS+format_schema%3D%27message_tmp%3AMessageTmp%27 HTTP/1.1" 400 None Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node [gw6] PASSED test_format_schema_on_server/test.py::test_drop_capn_proto_format Executing query DROP TABLE IF EXISTS test.simple on instance test_format_schema_on_server/test.py::test_protobuf_format_input Stdout: PID TTY TIME CMD Stdout: 8 ? 
00:00:01 clickhouse run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottesthedgedrequestsparallel-gw3-node-1 bash -c pkill clickhouse] Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n4 Executing query OPTIMIZE TABLE test.graphite PARTITION 200109 FINAL; SELECT * FROM test.graphite; on instance http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None Stdout:8 http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None Executing query CREATE TABLE test.simple (key UInt64, value String) ENGINE = MergeTree ORDER BY tuple(); on instance Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n4 http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None Executing query DROP TABLE test.graphite on instance [gw5] PASSED test_graphite_merge_tree/test.py::test_multiple_output_blocks Executing query INSERT INTO test.simple SETTINGS format_schema='simple:KeyValuePair' FORMAT Protobuf on instance via HTTP interface http://localhost:None "GET /v1.46/containers/d86d7cfabee32b65434555bb4e85dc0c28e6582a158550e99cb2dedc510543cc/json HTTP/1.1" 200 None ClickHouse s0_0_0 started get_instance_ip instance_name=s0_0_1 Starting new HTTP connection (1): 172.16.7.2:8123 http://localhost:None "GET /v1.46/containers/roottestfilecluster-gw2-s0_0_1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=s0_0_1 http://localhost:None "GET /v1.46/containers/roottestfilecluster-gw2-s0_0_1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in s0_0_1, ip: 172.16.5.5... 
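Aside: the two preceding records show the harness's restart dance for roottesthedgedrequestsparallel-gw3-node-1: a `pkill clickhouse`, then repeated `ps ax | grep 'clickhouse' ...` probes until the PID disappears. A minimal sketch of that wait loop, assuming plain `docker exec` via `subprocess` (the real harness drives the same pipeline through the Docker API; `wait_for_clickhouse_stop` is a hypothetical name):

    import subprocess
    import time

    # The exact ps pipeline issued in the log records above.
    PS_CMD = ("ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' "
              "| grep -v 'bash -c' | awk '{print $1}'")

    def wait_for_clickhouse_stop(container: str, timeout: float = 60.0) -> None:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            pids = subprocess.run(
                ["docker", "exec", container, "bash", "-c", PS_CMD],
                capture_output=True, text=True,
            ).stdout.strip()
            if not pids:  # matches "No clickhouse process running." later in the log
                return
            time.sleep(1)  # log keeps printing Stdout:8 until the process exits
        raise TimeoutError(f"clickhouse still running in {container}")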
http://localhost:None "GET /v1.46/containers/roottestfilecluster-gw2-s0_0_1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b0a6614f77271e7101b099117788d88a5161aab89959df32a111c63b2333703a/json HTTP/1.1" 200 None ClickHouse s0_0_1 started get_instance_ip instance_name=s0_1_0 http://172.16.7.2:8123 "POST /?query=INSERT+INTO+test.simple+SETTINGS+format_schema%3D%27simple%3AKeyValuePair%27+FORMAT+Protobuf HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/roottestfilecluster-gw2-s0_1_0-1/json HTTP/1.1" 200 None Executing query SELECT * from test.simple on instance get_instance_ip instance_name=s0_1_0 http://localhost:None "GET /v1.46/containers/roottestfilecluster-gw2-s0_1_0-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in s0_1_0, ip: 172.16.5.7... http://localhost:None "GET /v1.46/containers/roottestfilecluster-gw2-s0_1_0-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/66e7629c2e1421f68f69a40224f628f7c0e763a0b330b028c6b333591eecdc97/json HTTP/1.1" 200 None ClickHouse s0_1_0 started Cluster started Executing query INSERT INTO TABLE FUNCTION file( 'file1.csv', 'CSV', 's String, i UInt32') VALUES ('file1',1) on s0_0_0 http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n4 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF\n\n\n \n \n encrypted\n disk_local\n encrypted_policy_multikeys_dir/\n \n firstfirstfirstf\n secondsecondseco\n secondsecondseco\n \n \n \n \n \n \n
\n encrypted_policy_multikeys_disk\n
\n
\n
\n
\n
\n
\nEOF'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF encrypted disk_local encrypted_policy_multikeys_dir/ firstfirstfirstf secondsecondseco secondsecondseco
encrypted_policy_multikeys_disk
EOF] Executing query select Settings from system.query_log where type = 'QueryFinish' and query_id = 'test_query_test_user_3' FORMAT JSON on node Executing query SYSTEM RELOAD CONFIG on node http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None test_graphite_merge_tree/test.py::test_multiple_paths_and_versions Executing query DROP TABLE IF EXISTS test.graphite; CREATE TABLE test.graphite (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=8192; on instance [gw6] PASSED test_format_schema_on_server/test.py::test_protobuf_format_input Executing query DROP TABLE IF EXISTS test.simple on instance test_format_schema_on_server/test.py::test_protobuf_format_output http://localhost:None "GET /v1.46/containers/06b7eda7022cda7e83c71048ccdde303a07284263e7bfca329a33a968e8782c7/json HTTP/1.1" 200 None Executing query INSERT INTO TABLE FUNCTION file( 'file2.csv', 'CSV', 's String, i UInt32') VALUES ('file2',2) on s0_0_0 ClickHouse node started get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-zoo1-1/json HTTP/1.1" 200 None Executing query INSERT INTO dist SELECT number, randomPrintableASCII(100) FROM numbers(10000) on n2 [gw4] PASSED test_keeper_client/test.py::test_base_commands Executing query SELECT currentUser() on node Executing query INSERT INTO encrypted_test VALUES (2,'data'),(3,'data') on node test_keeper_client/test.py::test_big_family get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-zoo1-1/json HTTP/1.1" 200 None Executing query INSERT INTO test.graphite SELECT 'one_min.x' AS metric, toFloat64(number) AS value, toUInt32(1111111111 + intDiv(number, 3) * 600) AS timestamp, toDate('2017-02-02') AS date, toUInt32(100 - number) AS updated FROM (SELECT * FROM system.numbers LIMIT 50); OPTIMIZE TABLE test.graphite PARTITION 201702 FINAL; SELECT * FROM test.graphite; INSERT INTO test.graphite SELECT 'one_min.y' AS metric, toFloat64(number) AS value, toUInt32(1111111111 + number * 600) AS timestamp, toDate('2017-02-02') AS date, toUInt32(100 - number) AS updated FROM (SELECT * FROM system.numbers LIMIT 50); OPTIMIZE TABLE test.graphite PARTITION 201702 FINAL; SELECT * FROM test.graphite; on instance Executing query CREATE TABLE test.simple (key UInt64, value String) ENGINE = MergeTree ORDER BY tuple(); on instance Executing query INSERT INTO TABLE FUNCTION file( 'file1.csv', 'CSV', 's String, i UInt32') VALUES ('file1',1) on s0_0_1 run container_id:roottestinsertdistributedasyncsend-gw0-n2-1 detach:False nothrow:False cmd: ['bash', '-c', 'wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n2-1 bash -c wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin] [gw4] PASSED test_keeper_client/test.py::test_big_family Stdout:1054718 run container_id:roottestinsertdistributedasyncsend-gw0-n2-1 detach:False nothrow:False cmd: ['bash', '-c', 'mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1046526 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin && head -c 8192 /dev/zero >> /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n2-1 bash -c mv 
/var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1046526 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin && head -c 8192 /dev/zero >> /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin] test_keeper_client/test.py::test_delete_stale_backups get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-zoo1-1/json HTTP/1.1" 200 None Executing query SYSTEM FLUSH DISTRIBUTED dist on n2 Executing query SYSTEM FLUSH LOGS on node Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query INSERT INTO test.simple VALUES (1, 'abc'), (2, 'def') on instance Executing query DROP TABLE test.graphite on instance [gw5] PASSED test_graphite_merge_tree/test.py::test_multiple_paths_and_versions Executing query INSERT INTO TABLE FUNCTION file( 'file2.csv', 'CSV', 's String, i UInt32') VALUES ('file2',2) on s0_0_1 [gw4] PASSED test_keeper_client/test.py::test_delete_stale_backups test_keeper_client/test.py::test_find_super_nodes get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-zoo1-1/json HTTP/1.1" 200 None run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:8 Executing query SELECT * FROM test.simple FORMAT Protobuf SETTINGS format_schema='simple:KeyValuePair' on instance via HTTP interface Starting new HTTP connection (1): 172.16.7.2:8123 http://172.16.7.2:8123 "GET /?query=SELECT+%2A+FROM+test.simple+FORMAT+Protobuf+SETTINGS+format_schema%3D%27simple%3AKeyValuePair%27 HTTP/1.1" 200 None Command:[docker compose --env-file /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/.env --project-name roottestformatschemaonserver-gw6 --file /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/instance/docker-compose.yml stop --timeout 20] [gw6] PASSED test_format_schema_on_server/test.py::test_protobuf_format_output [gw4] PASSED test_keeper_client/test.py::test_find_super_nodes Executing query DROP TABLE IF EXISTS test.graphite; CREATE TABLE test.graphite (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=8192; on instance test_graphite_merge_tree/test.py::test_path_dangling_pointer Executing query SYSTEM FLUSH DISTRIBUTED dist on n2 Executing query INSERT INTO TABLE FUNCTION file( 'file1.csv', 'CSV', 's String, i UInt32') VALUES ('file1',1) on s0_1_0 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF\n<clickhouse>\n<storage_configuration>\n <disks>\n <encrypted_policy_multikeys_disk>\n <type>encrypted</type>\n <disk>disk_local</disk>\n <path>encrypted_policy_multikeys_dir/</path>\n <keys>\n <key id="1">secondsecondseco</key>\n <key id="0">firstfirstfirstf</key>\n <current_key_id>1</current_key_id>\n </keys>\n </encrypted_policy_multikeys_disk>\n </disks>\n <policies>\n <encrypted_policy_multikeys>\n <volumes>\n <main>\n <disk>encrypted_policy_multikeys_disk</disk>\n </main>\n </volumes>\n </encrypted_policy_multikeys>\n </policies>\n</storage_configuration>\n</clickhouse>\nEOF'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF
<clickhouse>
<storage_configuration>
    <disks>
        <encrypted_policy_multikeys_disk>
            <type>encrypted</type>
            <disk>disk_local</disk>
            <path>encrypted_policy_multikeys_dir/</path>
            <keys>
                <key id="1">secondsecondseco</key>
                <key id="0">firstfirstfirstf</key>
                <current_key_id>1</current_key_id>
            </keys>
        </encrypted_policy_multikeys_disk>
    </disks>
    <policies>
        <encrypted_policy_multikeys>
            <volumes>
                <main>
                    <disk>encrypted_policy_multikeys_disk</disk>
                </main>
            </volumes>
        </encrypted_policy_multikeys>
    </policies>
</storage_configuration>
</clickhouse>
EOF] test_keeper_client/test.py::test_four_letter_word_commands get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-zoo1-1/json HTTP/1.1" 200 None Executing query SYSTEM RELOAD CONFIG on node [gw4] PASSED test_keeper_client/test.py::test_four_letter_word_commands test_keeper_client/test.py::test_get_all_children_number get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-zoo1-1/json HTTP/1.1" 200 None run container_id:roottestinsertdistributedasyncsend-gw0-n2-1 detach:False nothrow:False cmd: ['bash', '-c', 'ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n2-1 bash -c ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin] Executing query INSERT INTO TABLE FUNCTION file( 'file2.csv', 'CSV', 's String, i UInt32') VALUES ('file2',2) on s0_1_0 Executing query select Settings from system.query_log where type = 'QueryFinish' and query_id = 'test_query_test_user_4' FORMAT JSON on node Executing query DROP TABLE IF EXISTS test.graphite2; CREATE TABLE test.graphite2 (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=1; on instance Stdout:/var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin Executing query SELECT count() FROM data on n1 [gw4] PASSED test_keeper_client/test.py::test_get_all_children_number Stderr: Container roottestgrpcprotocolssl-gw9-node-1 Stopping Stderr: Container roottestgrpcprotocolssl-gw9-node-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/.env --project-name roottestgrpcprotocolssl-gw9 --file /ClickHouse/tests/integration/test_grpc_protocol_ssl/_instances-0-gw9/node/docker-compose.yml down --volumes] test_keeper_client/test.py::test_quoted_argument_parsing get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-zoo1-1/json HTTP/1.1" 200 None Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query SELECT count(*) from file('file{1,2}.csv', 'CSV', 's String, i UInt32') on s0_0_0 [gw4] PASSED test_keeper_client/test.py::test_quoted_argument_parsing [gw7] PASSED test_external_http_authenticator/test.py::test_session_settings_from_auth_response test_external_http_authenticator/test.py::test_user_create_basic_auth_pass Executing query CREATE USER basic_user IDENTIFIED WITH HTTP SERVER 'basic_server' SCHEME 'BASIC' on node test_keeper_client/test.py::test_rm_with_version get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-zoo1-1/json HTTP/1.1" 200 None Executing query SELECT count() FROM data on n2 Executing query INSERT INTO test.graphite2 FORMAT TSV on instance [gw4] PASSED test_keeper_client/test.py::test_rm_with_version get_instance_ip instance_name=zoo1 test_keeper_client/test.py::test_rm_without_version http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-zoo1-1/json HTTP/1.1" 200 None run
container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF\n<clickhouse>\n<storage_configuration>\n <disks>\n <encrypted_policy_multikeys_disk>\n <type>encrypted</type>\n <disk>disk_local</disk>\n <path>encrypted_policy_multikeys_dir/</path>\n <keys>\n <key>secondsecondseco</key>\n <key>wrongwrongwrongw</key>\n <current_key>secondsecondseco</current_key>\n </keys>\n </encrypted_policy_multikeys_disk>\n </disks>\n <policies>\n <encrypted_policy_multikeys>\n <volumes>\n <main>\n <disk>encrypted_policy_multikeys_disk</disk>\n </main>\n </volumes>\n </encrypted_policy_multikeys>\n </policies>\n</storage_configuration>\n</clickhouse>\nEOF'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF
<clickhouse>
<storage_configuration>
    <disks>
        <encrypted_policy_multikeys_disk>
            <type>encrypted</type>
            <disk>disk_local</disk>
            <path>encrypted_policy_multikeys_dir/</path>
            <keys>
                <key>secondsecondseco</key>
                <key>wrongwrongwrongw</key>
                <current_key>secondsecondseco</current_key>
            </keys>
        </encrypted_policy_multikeys_disk>
    </disks>
    <policies>
        <encrypted_policy_multikeys>
            <volumes>
                <main>
                    <disk>encrypted_policy_multikeys_disk</disk>
                </main>
            </volumes>
        </encrypted_policy_multikeys>
    </policies>
</storage_configuration>
</clickhouse>
EOF] Executing query SHOW CREATE USER basic_user on node Executing query SELECT count(*) from fileCluster('my_cluster', 'file{1,2}.csv', 'CSV', 's String, i UInt32') on s0_0_0 Executing query SYSTEM RELOAD CONFIG on node [gw4] PASSED test_keeper_client/test.py::test_rm_without_version [gw0] PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_big[0] test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_big[1] Executing query DROP TABLE IF EXISTS data on n1 get_instance_ip instance_name=zoo1 test_keeper_client/test.py::test_set_with_version http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-zoo1-1/json HTTP/1.1" 200 None [gw4] PASSED test_keeper_client/test.py::test_set_with_version run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SELECT currentUser() on node get_instance_ip instance_name=zoo1 test_keeper_client/test.py::test_set_without_version http://localhost:None "GET /v1.46/containers/roottestkeeperclient-gw4-zoo1-1/json HTTP/1.1" 200 None Stdout:8 [gw2] PASSED test_file_cluster/test.py::test_count test_file_cluster/test.py::test_format_detection Executing query INSERT INTO TABLE FUNCTION file( 'file_for_format_detection_1', 'CSV', 's String, i UInt32') VALUES ('file1',1) on s0_0_0 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query DROP TABLE IF EXISTS dist on n1 Stderr: Container roottestgrpcprotocolssl-gw9-node-1 Stopping Stderr: Container roottestgrpcprotocolssl-gw9-node-1 Stopped Stderr: Container roottestgrpcprotocolssl-gw9-node-1 Removing Stderr: Container roottestgrpcprotocolssl-gw9-node-1 Removed Stderr: Network roottestgrpcprotocolssl-gw9_default Removing Stderr: Network roottestgrpcprotocolssl-gw9_default Removed Cleanup called Docker networks for project roottestgrpcprotocolssl-gw9 are NETWORK ID NAME DRIVER SCOPE [gw4] PASSED test_keeper_client/test.py::test_set_without_version Docker containers for project roottestgrpcprotocolssl-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestgrpcprotocolssl-gw9 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestgrpcprotocolssl-gw9-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestgrpcprotocolssl-gw9 Trying to prune unused networks... Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/.env --project-name roottestkeeperclient-gw4 --file /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml stop --timeout 20] Trying to prune unused images... Command:[docker image prune -f] Executing query DROP USER basic_user on node Executing query INSERT INTO test.graphite2 FORMAT TSV on instance Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes...
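Aside on the corrupted_big case that just PASSED on gw0: the `wc -c` probe reported the distributed batch file at 1054718 bytes, and the corruption preserved that size exactly, since 1046526 + 8192 = 1054718; the tail is overwritten with zeros rather than truncated, and the receiver then quarantines the batch as `broken/2.bin` (the `ls` check above). A minimal size-preserving sketch of the same corruption, assuming a local file path instead of docker exec:

    # Hypothetical re-implementation of the mv/head pipeline from the log:
    # keep the first 1046526 bytes of the 1054718-byte batch and pad with
    # 8192 zero bytes, so the total is 1054718 again (1046526 + 8192).
    KEEP, PAD = 1046526, 8192

    def corrupt_tail(path: str) -> None:
        with open(path, "rb") as f:
            head = f.read(KEEP)
        with open(path, "wb") as f:
            f.write(head)
            f.write(b"\x00" * PAD)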
Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 test_fetch_partition_from_auxiliary_zookeeper/test.py::test_fetch_part_from_allowed_zookeeper[PART-2020-08-28-20200828_0_0_0] Running tests in /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/test.py Executing query INSERT INTO TABLE FUNCTION file( 'file_for_format_detection_2', 'CSV', 's String, i UInt32') VALUES ('file2',2) on s0_0_0 Cluster start called. is_up=False Docker networks for project roottestfetchpartitionfromauxiliaryzookeeper-gw9 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestfetchpartitionfromauxiliaryzookeeper-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Executing query DROP TABLE IF EXISTS data on n2 Docker volumes for project roottestfetchpartitionfromauxiliaryzookeeper-gw9 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestfetchpartitionfromauxiliaryzookeeper-gw9 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestfetchpartitionfromauxiliaryzookeeper-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestfetchpartitionfromauxiliaryzookeeper-gw9 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestfetchpartitionfromauxiliaryzookeeper-gw9-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestfetchpartitionfromauxiliaryzookeeper-gw9 Trying to prune unused networks... Executing query ALTER TABLE encrypted_test DETACH PART 'all_1_1_0' on node Trying to prune unused images... Command:[docker image prune -f] [gw7] PASSED test_external_http_authenticator/test.py::test_user_create_basic_auth_pass test_external_http_authenticator/test.py::test_user_from_config_basic_auth_pass Executing query SHOW CREATE USER good_user on node Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Setup directory for instance: node Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/configs/zookeeper_config.xml'] to /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/node/database Setup logs dir /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/node/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/.env --project-name roottestfetchpartitionfromauxiliaryzookeeper-gw9 --file /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml pull] Executing query INSERT INTO TABLE FUNCTION file( 'file_for_format_detection_1', 'CSV', 's String, i UInt32') VALUES ('file1',1) on s0_0_1 
Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query DROP TABLE IF EXISTS dist on n2 Executing query SELECT currentUser() on node Executing query INSERT INTO test.graphite2 FORMAT TSV on instance Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_add_keys Executing query DROP TABLE IF EXISTS data on n3 Executing query INSERT INTO TABLE FUNCTION file( 'file_for_format_detection_2', 'CSV', 's String, i UInt32') VALUES ('file2',2) on s0_0_1 Command:[docker compose --env-file /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/.env --project-name roottestexternalhttpauthenticator-gw7 --file /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/node/docker-compose.yml stop --timeout 20] [gw7] PASSED test_external_http_authenticator/test.py::test_user_from_config_basic_auth_pass run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] test_encrypted_disk/test.py::test_add_keys_with_id Executing query SELECT policy_name FROM system.storage_policies on node Executing query DROP TABLE IF EXISTS dist on n3 Executing query INSERT INTO TABLE FUNCTION file( 'file_for_format_detection_1', 'CSV', 's String, i UInt32') VALUES ('file1',1) on s0_1_0 run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottesthedgedrequestsparallel-gw3-node-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/118c0d654ee30e81c9ab29c66ee4d7de46727952287cce55a47ce50a14157ca6/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/118c0d654ee30e81c9ab29c66ee4d7de46727952287cce55a47ce50a14157ca6/json HTTP/1.1" 200 586 Executing query INSERT INTO test.graphite2 FORMAT TSV on instance Executing query INSERT INTO TABLE FUNCTION file( 'file_for_format_detection_2', 'CSV', 's String, i UInt32') VALUES ('file2',2) on s0_1_0 Executing query DROP TABLE IF EXISTS data on n4 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF\n<clickhouse>\n<storage_configuration>\n <disks>\n <encrypted_policy_multikeys_disk>\n <type>encrypted</type>\n <disk>disk_local</disk>\n <path>encrypted_policy_multikeys_dir/</path>\n <key id="0">firstfirstfirstf</key>\n </encrypted_policy_multikeys_disk>\n </disks>\n <policies>\n <encrypted_policy_multikeys>\n <volumes>\n <main>\n <disk>encrypted_policy_multikeys_disk</disk>\n </main>\n </volumes>\n </encrypted_policy_multikeys>\n </policies>\n</storage_configuration>\n</clickhouse>\nEOF'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF
<clickhouse>
<storage_configuration>
    <disks>
        <encrypted_policy_multikeys_disk>
            <type>encrypted</type>
            <disk>disk_local</disk>
            <path>encrypted_policy_multikeys_dir/</path>
            <key id="0">firstfirstfirstf</key>
        </encrypted_policy_multikeys_disk>
    </disks>
    <policies>
        <encrypted_policy_multikeys>
            <volumes>
                <main>
                    <disk>encrypted_policy_multikeys_disk</disk>
                </main>
            </volumes>
        </encrypted_policy_multikeys>
    </policies>
</storage_configuration>
</clickhouse>
EOF] Executing query SYSTEM RELOAD CONFIG on node Stderr: Container roottestkeeperclient-gw4-node-1 Stopping Stderr: Container roottestkeeperclient-gw4-node-1 Stopped Stderr: Container roottestkeeperclient-gw4-zoo1-1 Stopping Stderr: Container roottestkeeperclient-gw4-zoo2-1 Stopping Stderr: Container roottestkeeperclient-gw4-zoo3-1 Stopping Stderr: Container roottestkeeperclient-gw4-zoo2-1 Stopped Stderr: Container roottestkeeperclient-gw4-zoo1-1 Stopped Stderr: Container roottestkeeperclient-gw4-zoo3-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/.env --project-name roottestkeeperclient-gw4 --file /ClickHouse/tests/integration/test_keeper_client/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml down --volumes] Executing query select * from file('file_for_format_detection*', 'CSV', 's String, i UInt32') ORDER BY (i, s) on s0_0_0 Executing query DROP TABLE IF EXISTS dist on n4 Stderr: Container roottestjbodloadbalancing-gw8-node-1 Stopping Stderr: Container roottestjbodloadbalancing-gw8-node-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/.env --project-name roottestjbodloadbalancing-gw8 --file /ClickHouse/tests/integration/test_jbod_load_balancing/_instances-0-gw8/node/docker-compose.yml down --volumes] Executing query SELECT policy_name FROM system.storage_policies WHERE policy_name='encrypted_policy_multikeys' on node Executing query INSERT INTO test.graphite2 FORMAT TSV on instance Executing query select * from fileCluster('my_cluster', 'file_for_format_detection*') ORDER BY (c1, c2) on s0_0_0 Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n1 Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='encrypted_policy_multikeys' on node run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:751 Clickhouse process running.
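Aside: the encrypted-disk worker keeps cycling the same three-step pattern visible throughout this log: overwrite the config.d storage-policy XML, run SYSTEM RELOAD CONFIG, then confirm the policy via system.storage_policies before touching encrypted_test. A compact sketch of that cycle, assuming the harness's `exec_in_container`/`query` helpers (names taken from the helper paths in this log); `reload_storage_policy` itself is a hypothetical wrapper:

    def reload_storage_policy(node, policy: str, xml_body: str) -> None:
        path = f"/etc/clickhouse-server/config.d/storage_policy_{policy}.xml"
        # Same heredoc write the log performs via docker exec.
        node.exec_in_container(["bash", "-c", f"cat > {path} << EOF\n{xml_body}\nEOF"])
        node.query("SYSTEM RELOAD CONFIG")
        # Same verification query the log issues after each reload.
        assert policy in node.query(
            f"SELECT policy_name FROM system.storage_policies WHERE policy_name='{policy}'"
        )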
run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n1 Stdout:751 Executing query select 20 on node Executing query select * from fileCluster('my_cluster', 'file_for_format_detection*', auto) ORDER BY (c1, c2) on s0_0_0 Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node Stderr: Container roottestkeeperclient-gw4-node-1 Stopping Stderr: Container roottestkeeperclient-gw4-node-1 Stopped Stderr: Container roottestkeeperclient-gw4-node-1 Removing Stderr: Container roottestkeeperclient-gw4-node-1 Removed Stderr: Container roottestkeeperclient-gw4-zoo3-1 Stopping Stderr: Container roottestkeeperclient-gw4-zoo1-1 Stopping Stderr: Container roottestkeeperclient-gw4-zoo2-1 Stopping Stderr: Container roottestkeeperclient-gw4-zoo1-1 Stopped Stderr: Container roottestkeeperclient-gw4-zoo1-1 Removing Stderr: Container roottestkeeperclient-gw4-zoo2-1 Stopped Stderr: Container roottestkeeperclient-gw4-zoo2-1 Removing Stderr: Container roottestkeeperclient-gw4-zoo3-1 Stopped Stderr: Container roottestkeeperclient-gw4-zoo3-1 Removing Stderr: Container roottestkeeperclient-gw4-zoo2-1 Removed Stderr: Container roottestkeeperclient-gw4-zoo3-1 Removed Stderr: Container roottestkeeperclient-gw4-zoo1-1 Removed Stderr: Network roottestkeeperclient-gw4_default Removing Stderr: Network roottestkeeperclient-gw4_default Removed Cleanup called Docker networks for project roottestkeeperclient-gw4 are NETWORK ID NAME DRIVER SCOPE Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n1 Docker containers for project roottestkeeperclient-gw4 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestkeeperclient-gw4 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestkeeperclient-gw4-.*-1$' --format '{{.ID}}:{{.Names}}'] Executing query INSERT INTO test.graphite2 FORMAT TSV on instance Unstopped containers: {} No running containers for project: roottestkeeperclient-gw4 Trying to prune unused networks... Executing query select * from fileCluster('my_cluster', 'file_for_format_detection*', auto, auto) ORDER BY (c1, c2) on s0_0_0 Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 test_jbod_ha/test.py::test_jbod_ha Running tests in /ClickHouse/tests/integration/test_jbod_ha/test.py Cluster start called. 
is_up=False Docker networks for project roottestjbodha-gw4 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestjbodha-gw4 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n2 Docker volumes for project roottestjbodha-gw4 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestjbodha-gw4 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestjbodha-gw4 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestjbodha-gw4 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestjbodha-gw4-.*-1$' --format '{{.ID}}:{{.Names}}'] Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Unstopped containers: {} No running containers for project: roottestjbodha-gw4 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Executing query select * from fileCluster('my_cluster', 'file_for_format_detection*', auto, 's String, i UInt32') ORDER BY (i, s) on s0_0_0 Stdout:Total reclaimed space: 0B Volumes pruned: 3 Setup directory for instance: node1 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_jbod_ha/configs/config.d/storage_configuration.xml'] to /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node1/configs/config.d Setup database dir /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node1/database Setup logs dir /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node1/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" Setup directory for instance: node2 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_jbod_ha/configs/config.d/storage_configuration.xml'] to /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node2/configs/config.d Setup database dir /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node2/database Setup logs dir /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node2/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" 
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/.env --project-name roottestjbodha-gw4 --file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node2/docker-compose.yml pull] Stderr: Container roottestexternalhttpauthenticator-gw7-node-1 Stopping Stderr: Container roottestexternalhttpauthenticator-gw7-node-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Executing query select 20 on node Command:[docker compose --env-file /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/.env --project-name roottestexternalhttpauthenticator-gw7 --file /ClickHouse/tests/integration/test_external_http_authenticator/_instances-0-gw7/node/docker-compose.yml down --volumes] Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n2 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF\n<clickhouse>\n<storage_configuration>\n <disks>\n <encrypted_policy_multikeys_disk>\n <type>encrypted</type>\n <disk>disk_local</disk>\n <path>encrypted_policy_multikeys_dir/</path>\n <keys>\n <key id="0">firstfirstfirstf</key>\n <key id="1">secondsecondseco</key>\n <current_key_id>1</current_key_id>\n </keys>\n </encrypted_policy_multikeys_disk>\n </disks>\n <policies>\n <encrypted_policy_multikeys>\n <volumes>\n <main>\n <disk>encrypted_policy_multikeys_disk</disk>\n </main>\n </volumes>\n </encrypted_policy_multikeys>\n </policies>\n</storage_configuration>\n</clickhouse>\nEOF'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF
<clickhouse>
<storage_configuration>
    <disks>
        <encrypted_policy_multikeys_disk>
            <type>encrypted</type>
            <disk>disk_local</disk>
            <path>encrypted_policy_multikeys_dir/</path>
            <keys>
                <key id="0">firstfirstfirstf</key>
                <key id="1">secondsecondseco</key>
                <current_key_id>1</current_key_id>
            </keys>
        </encrypted_policy_multikeys_disk>
    </disks>
    <policies>
        <encrypted_policy_multikeys>
            <volumes>
                <main>
                    <disk>encrypted_policy_multikeys_disk</disk>
                </main>
            </volumes>
        </encrypted_policy_multikeys>
    </policies>
</storage_configuration>
</clickhouse>
EOF] Executing query select * from fileCluster('my_cluster', 'file_for_format_detection*', auto, auto, auto) ORDER BY (c1, c2) on s0_0_0 Executing query SYSTEM RELOAD CONFIG on node Stderr: Container roottestjbodloadbalancing-gw8-node-1 Stopping Executing query INSERT INTO test.graphite2 FORMAT TSV on instance Stderr: Container roottestjbodloadbalancing-gw8-node-1 Stopped Stderr: Container roottestjbodloadbalancing-gw8-node-1 Removing Stderr: Container roottestjbodloadbalancing-gw8-node-1 Removed Stderr: Network roottestjbodloadbalancing-gw8_default Removing Stderr: Network roottestjbodloadbalancing-gw8_default Removed Cleanup called Docker networks for project roottestjbodloadbalancing-gw8 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestjbodloadbalancing-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestjbodloadbalancing-gw8 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestjbodloadbalancing-gw8-.*-1$' --format '{{.ID}}:{{.Names}}'] Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n2 Unstopped containers: {} No running containers for project: roottestjbodloadbalancing-gw8 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Executing query SELECT count() FROM distributed on node Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 test_drop_replica_with_auxiliary_zookeepers/test.py::test_drop_replica_in_auxiliary_zookeeper Running tests in /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/test.py Cluster start called. is_up=False Docker networks for project roottestdropreplicawithauxiliaryzookeepers-gw8 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestdropreplicawithauxiliaryzookeepers-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestdropreplicawithauxiliaryzookeepers-gw8 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestdropreplicawithauxiliaryzookeepers-gw8 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestdropreplicawithauxiliaryzookeepers-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Executing query select * from fileCluster('my_cluster', 'file_for_format_detection*', auto, 's String, i UInt32') ORDER BY (i, s) on s0_0_0 Docker volumes for project roottestdropreplicawithauxiliaryzookeepers-gw8 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestdropreplicawithauxiliaryzookeepers-gw8-.*-1$' --format '{{.ID}}:{{.Names}}'] Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n3 Unstopped containers: {} No running containers for project: roottestdropreplicawithauxiliaryzookeepers-gw8 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Executing query INSERT INTO encrypted_test VALUES (2,'data'),(3,'data') on node Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes...
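Aside: the gw2 worker is walking fileCluster() through its optional arguments; after the cluster name and glob, the remaining positions are format, structure, and compression, each of which may be left as auto, and every variant is expected to return the same rows as plain file() over the same glob. A minimal equivalence check in that spirit, assuming the harness's `query` helper (`check_file_cluster` is a hypothetical name):

    def check_file_cluster(node, glob: str = "file_for_format_detection*") -> None:
        # Argument tails mirroring the combinations exercised in the log.
        tails = [
            "",                                    # rely on format detection
            ", auto",                              # explicit auto format
            ", auto, 's String, i UInt32'",        # auto format, fixed structure
            ", auto, 's String, i UInt32', auto",  # plus auto compression
        ]
        for tail in tails:
            local = node.query(f"SELECT * FROM file('{glob}'{tail}) ORDER BY 1, 2")
            clustered = node.query(
                f"SELECT * FROM fileCluster('my_cluster', '{glob}'{tail}) ORDER BY 1, 2"
            )
            assert local == clustered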
Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Setup directory for instance: node1 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/configs/zookeeper_config.xml', '/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node1/configs/config.d Setup database dir /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node1/database Setup logs dir /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node1/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" Setup directory for instance: node2 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/configs/zookeeper_config.xml', '/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node2/configs/config.d Setup database dir /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node2/database Setup logs dir /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node2/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" 
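Aside: the long setup narration above (config.d copies, database/logs dirs, the trap/coproc entrypoint) is what the harness generates per instance; a hedged sketch of the test-side declaration that would produce it, with names inferred from the paths in this log (helpers/cluster.py, configs/zookeeper_config.xml, configs/remote_servers.xml), not a verbatim copy of the test file:

    from helpers.cluster import ClickHouseCluster

    cluster = ClickHouseCluster(__file__)
    node1 = cluster.add_instance(
        "node1",
        main_configs=[
            "configs/zookeeper_config.xml",  # copied into configs/config.d above
            "configs/remote_servers.xml",
        ],
        with_zookeeper=True,  # pulls in docker_compose_zookeeper.yml
    )
    node2 = cluster.add_instance(
        "node2",
        main_configs=["configs/zookeeper_config.xml", "configs/remote_servers.xml"],
        with_zookeeper=True,
    )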
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'ZK_FS': 'bind', 'ZK_DATA1': '/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/zk1/data', 'ZK_DATA_LOG1': '/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/zk1/log', 'ZK_DATA2': '/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/zk2/data', 'ZK_DATA_LOG2': '/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/zk2/log', 'ZK_DATA3': '/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/zk3/data', 'ZK_DATA_LOG3': '/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/zk3/log'} stored in /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/.env --project-name roottestdropreplicawithauxiliaryzookeepers-gw8 --file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_zookeeper.yml --file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node2/docker-compose.yml pull] Stderr: Container roottestformatschemaonserver-gw6-instance-1 Stopping Stderr: Container roottestformatschemaonserver-gw6-instance-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/instance/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/instance/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/.env --project-name roottestformatschemaonserver-gw6 --file /ClickHouse/tests/integration/test_format_schema_on_server/_instances-0-gw6/instance/docker-compose.yml down --volumes] Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n3 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node [gw2] PASSED test_file_cluster/test.py::test_format_detection test_file_cluster/test.py::test_missing_file Executing query SELECT * from file('file{1,2,3}.csv', 'CSV', 's String, i UInt32') ORDER BY (i, s) on s0_0_0 Executing query INSERT INTO test.graphite2 FORMAT TSV on instance Stderr: Container roottestexternalhttpauthenticator-gw7-node-1 Stopping Stderr: Container roottestexternalhttpauthenticator-gw7-node-1 Stopped Stderr: Container roottestexternalhttpauthenticator-gw7-node-1 Removing Stderr: Container roottestexternalhttpauthenticator-gw7-node-1 Removed Stderr: Network roottestexternalhttpauthenticator-gw7_default Removing Stderr: Network roottestexternalhttpauthenticator-gw7_default Removed Cleanup called Docker networks for project 
roottestexternalhttpauthenticator-gw7 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestexternalhttpauthenticator-gw7 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestexternalhttpauthenticator-gw7 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestexternalhttpauthenticator-gw7-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestexternalhttpauthenticator-gw7 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n3 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF\n<clickhouse>\n<storage_configuration>\n <disks>\n <encrypted_policy_multikeys_disk>\n <type>encrypted</type>\n <disk>disk_local</disk>\n <path>encrypted_policy_multikeys_dir/</path>\n <keys>\n <key id="1">secondsecondseco</key>\n <key id="0">firstfirstfirstf</key>\n <current_key_id>1</current_key_id>\n </keys>\n </encrypted_policy_multikeys_disk>\n </disks>\n <policies>\n <encrypted_policy_multikeys>\n <volumes>\n <main>\n <disk>encrypted_policy_multikeys_disk</disk>\n </main>\n </volumes>\n </encrypted_policy_multikeys>\n </policies>\n</storage_configuration>\n</clickhouse>\nEOF'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF
<clickhouse>
<storage_configuration>
    <disks>
        <encrypted_policy_multikeys_disk>
            <type>encrypted</type>
            <disk>disk_local</disk>
            <path>encrypted_policy_multikeys_dir/</path>
            <keys>
                <key id="1">secondsecondseco</key>
                <key id="0">firstfirstfirstf</key>
                <current_key_id>1</current_key_id>
            </keys>
        </encrypted_policy_multikeys_disk>
    </disks>
    <policies>
        <encrypted_policy_multikeys>
            <volumes>
                <main>
                    <disk>encrypted_policy_multikeys_disk</disk>
                </main>
            </volumes>
        </encrypted_policy_multikeys>
    </policies>
</storage_configuration>
</clickhouse>
EOF] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Executing query SYSTEM RELOAD CONFIG on node Stdout:Total reclaimed space: 0B Volumes pruned: 3 test_input_format_parallel_parsing_memory_tracking/test.py::test_memory_tracking_total Running tests in /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/test.py Cluster start called. is_up=False Docker networks for project roottestinputformatparallelparsingmemorytracking-gw7 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestinputformatparallelparsingmemorytracking-gw7 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestinputformatparallelparsingmemorytracking-gw7 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestinputformatparallelparsingmemorytracking-gw7 are NETWORK ID NAME DRIVER SCOPE Executing query SELECT * from file('file{1,2}.csv', 'CSV', 's String, i UInt32') ORDER BY (i, s) on s0_0_0 Docker containers for project roottestinputformatparallelparsingmemorytracking-gw7 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestinputformatparallelparsingmemorytracking-gw7 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestinputformatparallelparsingmemorytracking-gw7-.*-1$' --format '{{.ID}}:{{.Names}}'] Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n4 Unstopped containers: {} No running containers for project: roottestinputformatparallelparsingmemorytracking-gw7 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes...
Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Setup directory for instance: instance Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/configs/conf.xml', '/ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/configs/asynchronous_metrics_update_period_s.xml'] to /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/instance/configs/config.d Setup database dir /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/instance/database Setup logs dir /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/instance/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/.env --project-name roottestinputformatparallelparsingmemorytracking-gw7 --file /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/instance/docker-compose.yml pull] Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query SELECT * from fileCluster('my_cluster', 'file{1,2,3}.csv', 'CSV', 's String, i UInt32') ORDER BY (i, s) on s0_0_0 Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n4 Stderr: Container roottestformatschemaonserver-gw6-instance-1 Stopping Stderr: Container roottestformatschemaonserver-gw6-instance-1 Stopped Stderr: Container roottestformatschemaonserver-gw6-instance-1 Removing Stderr: Container roottestformatschemaonserver-gw6-instance-1 Removed Stderr: Network roottestformatschemaonserver-gw6_default Removing Stderr: Network roottestformatschemaonserver-gw6_default Removed Cleanup called Docker networks for project roottestformatschemaonserver-gw6 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestformatschemaonserver-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Executing query INSERT INTO test.graphite2 FORMAT TSV on instance Docker volumes for project roottestformatschemaonserver-gw6 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestformatschemaonserver-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestformatschemaonserver-gw6 Trying to prune unused networks... 
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF\n<clickhouse>\n <storage_configuration>\n <disks>\n <encrypted_policy_multikeys_disk>\n <type>encrypted</type>\n <disk>disk_local</disk>\n <path>encrypted_policy_multikeys_dir/</path>\n <keys>\n <key id="0">secondsecondseco</key>\n <key id="1">wrongwrongwrongw</key>\n <current_key_id>1</current_key_id>\n </keys>\n </encrypted_policy_multikeys_disk>\n </disks>\n <policies>\n <encrypted_policy_multikeys>\n <volumes>\n <main>\n <disk>encrypted_policy_multikeys_disk</disk>\n </main>\n </volumes>\n </encrypted_policy_multikeys>\n </policies>\n </storage_configuration>\n</clickhouse>\nEOF'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF <clickhouse> <storage_configuration> <disks> <encrypted_policy_multikeys_disk> <type>encrypted</type> <disk>disk_local</disk> <path>encrypted_policy_multikeys_dir/</path> <keys> <key id="0">secondsecondseco</key> <key id="1">wrongwrongwrongw</key> <current_key_id>1</current_key_id> </keys> </encrypted_policy_multikeys_disk> </disks> <policies> <encrypted_policy_multikeys> <volumes> <main> <disk>encrypted_policy_multikeys_disk</disk> </main> </volumes> </encrypted_policy_multikeys> </policies> </storage_configuration> </clickhouse>
EOF] Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Executing query SYSTEM RELOAD CONFIG on node Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n4 Stdout:Total reclaimed space: 0B Volumes pruned: 3 test_http_and_readonly/test.py::test_http_get_is_readonly Running tests in /ClickHouse/tests/integration/test_http_and_readonly/test.py Cluster start called. is_up=False Executing query SELECT * from fileCluster('my_cluster', 'file{1,2}.csv', 'CSV', 's String, i UInt32') ORDER BY (i, s) on s0_0_0 Docker networks for project roottesthttpandreadonly-gw6 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottesthttpandreadonly-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottesthttpandreadonly-gw6 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottesthttpandreadonly-gw6 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottesthttpandreadonly-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottesthttpandreadonly-gw6 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottesthttpandreadonly-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottesthttpandreadonly-gw6 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Setup directory for instance: instance Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/instance/configs/config.d Setup database dir /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/instance/database Setup logs dir /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/instance/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Executing query INSERT INTO dist SELECT number, randomPrintableASCII(100) FROM numbers(10000) on n1 Command:[docker compose --env-file /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/.env --project-name roottesthttpandreadonly-gw6 --file /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/instance/docker-compose.yml pull] Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node [gw2] PASSED 
test_file_cluster/test.py::test_missing_file test_file_cluster/test.py::test_no_such_files Executing query SELECT * from file('file{3,4}.csv', 'CSV', 's String, i UInt32') ORDER BY (i, s) on s0_0_0 Executing query ALTER TABLE encrypted_test DETACH PART 'all_1_1_0' on node run container_id:roottestinsertdistributedasyncsend-gw0-n1-1 detach:False nothrow:False cmd: ['bash', '-c', 'wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n1-1 bash -c wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin] Executing query INSERT INTO test.graphite2 FORMAT TSV on instance Executing query SELECT * from fileCluster('my_cluster', 'file{3,4}.csv', 'CSV', 's String, i UInt32') ORDER BY (i, s) on s0_0_0 Stdout:1054688 run container_id:roottestinsertdistributedasyncsend-gw0-n1-1 detach:False nothrow:False cmd: ['bash', '-c', 'mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1046496 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin && head -c 8192 /dev/zero >> /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n1-1 bash -c mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1046496 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin && head -c 8192 /dev/zero >> /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin] Executing query SYSTEM FLUSH DISTRIBUTED dist on n1 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node [gw2] PASSED test_file_cluster/test.py::test_no_such_files test_file_cluster/test.py::test_non_existent_cluster Executing query SELECT count(*) from fileCluster( 'non_existent_cluster', 'file{1,2}.csv', 'CSV', 's String, i UInt32') UNION ALL SELECT count(*) from fileCluster( 'non_existent_cluster', 'file{1,2}.csv', 'CSV', 's String, i UInt32') on s0_0_0 Executing query SYSTEM FLUSH DISTRIBUTED dist on n1 Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_add_keys_with_id Executing query select * from file('file*.csv', 'CSV', 's String, i UInt32') ORDER BY (i, s) on s0_0_0 [gw2] PASSED test_file_cluster/test.py::test_non_existent_cluster test_file_cluster/test.py::test_schema_inference run container_id:roottestinsertdistributedasyncsend-gw0-n1-1 detach:False nothrow:False cmd: ['bash', '-c', 'ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n1-1 bash -c ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin] test_encrypted_disk/test.py::test_backup_restore[File-encrypted_policy-local_policy-False] Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='encrypted_policy' on node Executing query INSERT INTO test.graphite2 FORMAT TSV on instance Stdout:/var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin Executing query SELECT count() FROM data on n1 Executing query select * from fileCluster('my_cluster', 'file*.csv') ORDER BY (c1, c2) on s0_0_0 Executing query SELECT value FROM system.events WHERE event='HedgedRequestsChangeReplica' on node Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node Executing query SELECT count() FROM data on n2 [gw3] PASSED test_hedged_requests_parallel/test.py::test_combination1 
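[Editor's note] The corruption step above is worth unpacking: the batch file 2.bin is 1054688 bytes; the test keeps the first 1046496 bytes and appends 8192 zero bytes, so the size is unchanged (1046496 + 8192 = 1054688) but the last 8 KiB is garbage, which is what later makes SYSTEM FLUSH DISTRIBUTED move the batch to broken/2.bin. A sketch of the same injection done directly in Python rather than via bash; operating on a local copy of the file is an assumption for illustration:

    # Zero out the tail of a Distributed batch file while keeping its size constant.
    path = "/var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin"
    keep = 1046496   # bytes preserved from the original file
    zeros = 8192     # bytes of zeroes appended, as in the log

    with open(path, "rb") as f:
        data = f.read(keep)
    with open(path, "wb") as f:
        f.write(data)
        f.write(b"\0" * zeros)   # total is again keep + zeros = 1054688 bytes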
test_hedged_requests_parallel/test.py::test_combination2 Executing query SELECT value FROM system.build_options WHERE name = 'CXX_FLAGS' on node Executing query select * from fileCluster('my_cluster', 'file*.csv', auto) ORDER BY (c1, c2) on s0_0_0 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node run container_id:roottesthedgedrequestsparallel-gw3-node_1-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '<clickhouse>\n <profiles>\n <default>\n <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms>\n <sleep_in_send_data_ms>30000</sleep_in_send_data_ms>\n </default>\n </profiles>\n</clickhouse>' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_1-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>30000</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] Executing query select * from fileCluster('my_cluster', 'file*.csv', CSV) ORDER BY (c1, c2) on s0_0_0 [gw0] PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_big[1] Executing query DROP TABLE IF EXISTS data on n1 test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_small[0] run container_id:roottesthedgedrequestsparallel-gw3-node_2-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '<clickhouse>\n <profiles>\n <default>\n <sleep_in_send_tables_status_ms>1000</sleep_in_send_tables_status_ms>\n <sleep_in_send_data_ms>0</sleep_in_send_data_ms>\n </default>\n </profiles>\n</clickhouse>' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_2-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>1000</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>0</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] Executing query INSERT INTO test.graphite2 FORMAT TSV on instance run container_id:roottesthedgedrequestsparallel-gw3-node_3-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '<clickhouse>\n <profiles>\n <default>\n <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms>\n <sleep_in_send_data_ms>30000</sleep_in_send_data_ms>\n </default>\n </profiles>\n</clickhouse>' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_3-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>30000</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] Executing query BACKUP TABLE encrypted_test TO File('/backups/backup1/') SETTINGS decrypt_files_from_encrypted_disks=0 on node run container_id:roottesthedgedrequestsparallel-gw3-node_4-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '<clickhouse>\n <profiles>\n <default>\n <sleep_in_send_tables_status_ms>1000</sleep_in_send_tables_status_ms>\n <sleep_in_send_data_ms>0</sleep_in_send_data_ms>\n </default>\n </profiles>\n</clickhouse>' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_4-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>1000</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>0</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 Executing query DROP TABLE IF EXISTS dist on n1 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query select * from fileCluster('my_cluster', 'file*.csv', auto, auto) ORDER BY (c1, c2) on s0_0_0 Executing query DROP TABLE encrypted_test SYNC on node Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 Executing query DROP TABLE IF EXISTS data on n2 http://172.16.4.3:8123 "GET
/?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 Executing query select * from fileCluster('my_cluster', 'file*.csv', CSV, auto) ORDER BY (c1, c2) on s0_0_0 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='local_policy' on node Executing query DROP TABLE IF EXISTS dist on n2 Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query OPTIMIZE TABLE test.graphite2 PARTITION 201801 FINAL on instance Executing query RESTORE TABLE encrypted_test FROM File('/backups/backup1/') SETTINGS allow_different_table_def=1 on node Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 Executing query select * from fileCluster('my_cluster', 'file*.csv', auto, auto, auto) ORDER BY (c1, c2) on s0_0_0 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 Executing query DROP TABLE IF EXISTS data on n3 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS dist on n3 Executing query select * from fileCluster('my_cluster', 'file*.csv', CSV, auto, auto) ORDER BY (c1, c2) on s0_0_0 Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 
"GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_backup_restore[File-encrypted_policy-local_policy-False] Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS data on n4 Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 Executing query SELECT count() FROM system.parts WHERE active AND database='test' AND table='graphite2' on instance http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_2 via HTTP interface Starting new HTTP connection (1): 172.16.4.5:8123 http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_2 via HTTP interface Starting new HTTP connection (1): 172.16.4.5:8123 http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_3 via HTTP interface Starting new HTTP connection (1): 172.16.4.4:8123 http://172.16.4.4:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_3 via HTTP interface Starting new HTTP connection (1): 172.16.4.4:8123 test_encrypted_disk/test.py::test_backup_restore[File-encrypted_policy-local_policy-True] Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='encrypted_policy' on node [gw2] PASSED test_file_cluster/test.py::test_schema_inference test_file_cluster/test.py::test_select_all Executing query SELECT * from file('file{1,2}.csv', 'CSV', 's String, i UInt32') ORDER BY (i, s) on s0_0_0 http://172.16.4.4:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_4 via HTTP interface Starting new HTTP connection (1): 172.16.4.2:8123 http://172.16.4.2:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_4 via HTTP interface Starting new HTTP connection (1): 172.16.4.2:8123 http://172.16.4.2:8123 "GET 
/?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottesthedgedrequestsparallel-gw3-node-1 bash -c ps -C clickhouse] Stdout: PID TTY TIME CMD Executing query DROP TABLE IF EXISTS dist on n4 Stdout: 751 ? 00:00:02 clickhouse run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottesthedgedrequestsparallel-gw3-node-1 bash -c pkill clickhouse] run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node Stdout:751 Executing query SELECT value, timestamp, date, updated FROM test.graphite2 on instance Executing query SELECT * from fileCluster('my_cluster', 'file{1,2}.csv', 'CSV', 's String, i UInt32') ORDER BY (i, s) on s0_0_0 Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n1 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query DROP TABLE test.graphite2 on instance Command:[docker compose --env-file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/.env --project-name roottestfilecluster-gw2 --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_1_0/docker-compose.yml stop --timeout 20] [gw2] PASSED test_file_cluster/test.py::test_select_all Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n1 Executing query BACKUP TABLE encrypted_test TO File('/backups/backup2/') SETTINGS decrypt_files_from_encrypted_disks=1 on node Executing query DROP TABLE test.graphite on instance [gw5] PASSED test_graphite_merge_tree/test.py::test_path_dangling_pointer Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n1 test_graphite_merge_tree/test.py::test_paths_not_matching_any_pattern Executing query DROP TABLE IF EXISTS test.graphite; CREATE TABLE test.graphite (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=8192; on instance Executing query DROP TABLE encrypted_test SYNC on node Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n2 Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='local_policy' on node Executing query INSERT INTO test.graphite FORMAT TSV on instance Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n2 run 
container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query RESTORE TABLE encrypted_test FROM File('/backups/backup2/') SETTINGS allow_different_table_def=1 on node Stdout:751 Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n2 Executing query OPTIMIZE TABLE test.graphite PARTITION 200109 FINAL; SELECT * FROM test.graphite; on instance Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n3 Executing query DROP TABLE test.graphite on instance [gw5] PASSED test_graphite_merge_tree/test.py::test_paths_not_matching_any_pattern Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_backup_restore[File-encrypted_policy-local_policy-True] Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n3 test_graphite_merge_tree/test.py::test_rollup_aggregation Executing query DROP TABLE IF EXISTS test.graphite; CREATE TABLE test.graphite (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=8192; on instance test_encrypted_disk/test.py::test_backup_restore[File-local_policy-encrypted_policy-False] Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='local_policy' on node Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n3 Executing query SELECT avg(v), max(upd) FROM (SELECT timestamp, argMax(value, (updated, number)) AS v, max(updated) AS upd FROM (SELECT 'one_min.x5' AS metric, toFloat64(number) AS value, toUInt32(1111111111 + intDiv(number, 3)) AS timestamp, toDate('2017-02-02') AS date, toUInt32(intDiv(number, 2)) AS updated, number FROM system.numbers LIMIT 1000000) WHERE intDiv(timestamp, 600) * 600 = 1111444200 GROUP BY timestamp) on instance Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n4 Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n4 Executing query SELECT * FROM encrypted_test ORDER BY id 
FORMAT Values on node No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottesthedgedrequestsparallel-gw3-node-1/exec HTTP/1.1" 201 74 Executing query INSERT INTO test.graphite SELECT 'one_min.x' AS metric, toFloat64(number) AS value, toUInt32(1111111111 + intDiv(number, 3)) AS timestamp, toDate('2017-02-02') AS date, toUInt32(intDiv(number, 2)) AS updated FROM (SELECT * FROM system.numbers LIMIT 1000000) WHERE intDiv(timestamp, 600) * 600 = 1111444200; OPTIMIZE TABLE test.graphite PARTITION 201702 FINAL; SELECT * FROM test.graphite; on instance http://localhost:None "POST /v1.46/exec/5f639d98e1e4b8382b716c5f7e09536c5c8f002ba37a2b42628e49f0c3a2b717/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/5f639d98e1e4b8382b716c5f7e09536c5c8f002ba37a2b42628e49f0c3a2b717/json HTTP/1.1" 200 586 Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n4 Executing query BACKUP TABLE encrypted_test TO File('/backups/backup3/') SETTINGS decrypt_files_from_encrypted_disks=0 on node Executing query DROP TABLE test.graphite on instance [gw5] PASSED test_graphite_merge_tree/test.py::test_rollup_aggregation Executing query INSERT INTO dist SELECT number, randomPrintableASCII(100) FROM numbers(10000) on n2 Executing query DROP TABLE encrypted_test SYNC on node test_graphite_merge_tree/test.py::test_rollup_aggregation_2 Executing query DROP TABLE IF EXISTS test.graphite; CREATE TABLE test.graphite (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=8192; on instance Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='encrypted_policy' on node run container_id:roottestinsertdistributedasyncsend-gw0-n2-1 detach:False nothrow:False cmd: ['bash', '-c', 'wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n2-1 bash -c wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin] Stdout:1054718 run container_id:roottestinsertdistributedasyncsend-gw0-n2-1 detach:False nothrow:False cmd: ['bash', '-c', 'mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1054658 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin && head -c 60 /dev/zero >> /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n2-1 bash -c mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1054658 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin && head -c 60 /dev/zero >> /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin] Executing query SYSTEM FLUSH DISTRIBUTED dist on n2 Executing query INSERT INTO test.graphite SELECT 'one_min.x' AS metric, toFloat64(number) AS value, toUInt32(1111111111 - intDiv(number, 3)) AS timestamp, toDate('2017-02-02') AS date, toUInt32(100 - number) AS updated FROM (SELECT * FROM system.numbers LIMIT 50); OPTIMIZE TABLE test.graphite PARTITION 201702 FINAL; SELECT * FROM test.graphite; on instance Executing query RESTORE TABLE encrypted_test FROM File('/backups/backup3/') SETTINGS allow_different_table_def=1 on node Executing query SYSTEM FLUSH DISTRIBUTED dist on n2 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query DROP TABLE test.graphite on 
instance [gw5] PASSED test_graphite_merge_tree/test.py::test_rollup_aggregation_2 run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:1511 Clickhouse process running. run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:1511 Executing query select 20 on node run container_id:roottestinsertdistributedasyncsend-gw0-n2-1 detach:False nothrow:False cmd: ['bash', '-c', 'ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n2-1 bash -c ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin] Stdout:/var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin Executing query SELECT count() FROM data on n1 Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_backup_restore[File-local_policy-encrypted_policy-False] test_graphite_merge_tree/test.py::test_rollup_versions Executing query DROP TABLE IF EXISTS test.graphite; CREATE TABLE test.graphite (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=8192; on instance Stderr: instance Pulling Stderr: instance Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/.env --project-name roottestinputformatparallelparsingmemorytracking-gw7 --file /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/instance/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/.env --project-name roottestinputformatparallelparsingmemorytracking-gw7 --file /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/instance/docker-compose.yml up -d --no-recreate] Executing query SELECT count() FROM data on n2 Stderr: node2 Skipped - Image is already being pulled by zoo2 Stderr: node1 Skipped - Image is already being pulled by zoo2 Stderr: zoo3 Skipped - Image is already being pulled by zoo2 Stderr: zoo1 Skipped - Image is already being pulled by zoo2 Stderr: zoo2 Pulling Stderr: zoo2 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper1/log', '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper1/config', '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper1/coordination', '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper2/log', '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper2/config', 
'/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper2/coordination', '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper3/log', '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper3/config', '/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/keeper3/coordination'] Stderr: instance Pulling Stderr: instance Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/.env --project-name roottesthttpandreadonly-gw6 --file /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/instance/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/.env --project-name roottesthttpandreadonly-gw6 --file /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/instance/docker-compose.yml up -d --no-recreate] Command:[docker compose --project-name roottestjbodha-gw4 --env-file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Stderr: node Skipped - Image is already being pulled by zoo3 Stderr: zoo1 Skipped - Image is already being pulled by zoo3 Stderr: zoo2 Skipped - Image is already being pulled by zoo3 Stderr: zoo3 Pulling Stderr: zoo3 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper1/log', '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper1/config', '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper1/coordination', '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper2/log', '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper2/config', '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper2/coordination', '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper3/log', '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper3/config', '/ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/keeper3/coordination'] Command:[docker compose --project-name roottestfetchpartitionfromauxiliaryzookeeper-gw9 --env-file /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] test_encrypted_disk/test.py::test_backup_restore[File-s3_encrypted_default_path-encrypted_policy-False] Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='s3_encrypted_default_path' on node Executing query INSERT INTO test.graphite (metric, value, timestamp, date, updated) VALUES ('one_min.x1', 100, 1743560829, '2025-04-02', 1); INSERT INTO test.graphite (metric, value, timestamp, date, updated) VALUES ('one_min.x1', 200, 1743560829, '2025-04-02', 2); on instance [gw0] PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_small[0] 
test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_small[1] Executing query DROP TABLE IF EXISTS data on n1 Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node Executing query SELECT * FROM test.graphite ORDER BY updated on instance Executing query select 20 on node Executing query DROP TABLE IF EXISTS dist on n1 Stderr: Network roottestinputformatparallelparsingmemorytracking-gw7_default Creating Stderr: Network roottestinputformatparallelparsingmemorytracking-gw7_default Created Stderr: Container roottestinputformatparallelparsingmemorytracking-gw7-instance-1 Creating Stderr: Container roottestinputformatparallelparsingmemorytracking-gw7-instance-1 Created Stderr: Container roottestinputformatparallelparsingmemorytracking-gw7-instance-1 Starting Stderr: Container roottestinputformatparallelparsingmemorytracking-gw7-instance-1 Started ClickHouse instance created get_instance_ip instance_name=instance http://localhost:None "GET /v1.46/containers/roottestinputformatparallelparsingmemorytracking-gw7-instance-1/json HTTP/1.1" 200 None get_instance_ip instance_name=instance http://localhost:None "GET /v1.46/containers/roottestinputformatparallelparsingmemorytracking-gw7-instance-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in instance, ip: 172.16.2.2... http://localhost:None "GET /v1.46/containers/roottestinputformatparallelparsingmemorytracking-gw7-instance-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/28f42757a7d5221914d45e2b1a9e839ce47eb3464260097caa060f478f9c5463/json HTTP/1.1" 200 None Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query OPTIMIZE TABLE test.graphite on instance Executing query DROP TABLE IF EXISTS data on n2 Executing query SELECT count() FROM distributed on node http://localhost:None "GET /v1.46/containers/28f42757a7d5221914d45e2b1a9e839ce47eb3464260097caa060f478f9c5463/json HTTP/1.1" 200 None Stderr: Network roottesthttpandreadonly-gw6_default Creating Stderr: Network roottesthttpandreadonly-gw6_default Created Stderr: Container roottesthttpandreadonly-gw6-instance-1 Creating Stderr: Container roottesthttpandreadonly-gw6-instance-1 Created Stderr: Container roottesthttpandreadonly-gw6-instance-1 Starting Stderr: Container roottesthttpandreadonly-gw6-instance-1 Started ClickHouse instance created get_instance_ip instance_name=instance http://localhost:None "GET /v1.46/containers/roottesthttpandreadonly-gw6-instance-1/json HTTP/1.1" 200 None get_instance_ip instance_name=instance http://localhost:None "GET /v1.46/containers/roottesthttpandreadonly-gw6-instance-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in instance, ip: 172.16.3.2... 
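[Editor's note] The repeated GET /v1.46/containers/<id>/json requests that follow are the framework polling Docker until the container reports running, after which it probes ClickHouse itself before declaring "ClickHouse instance started". A sketch of such a wait loop using the docker Python SDK plus a trivial TCP probe on the native port; the helper name and the port-9000 probe are assumptions, the real helpers live in tests/integration/helpers/cluster.py:

    import socket
    import time

    import docker  # pip install docker

    def wait_for_instance(container_name, ip, timeout=120):
        client = docker.from_env()
        deadline = time.time() + timeout
        while time.time() < deadline:
            # Same container-state JSON the log is fetching over and over.
            container = client.containers.get(container_name)
            if container.status == "running":
                try:
                    # Probe the native protocol port to see if the server accepts connections.
                    with socket.create_connection((ip, 9000), timeout=1):
                        return
                except OSError:
                    pass
            time.sleep(0.5)
        raise TimeoutError(f"{container_name} did not start within {timeout}s")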
http://localhost:None "GET /v1.46/containers/roottesthttpandreadonly-gw6-instance-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/3361586f936bd1835176846c2a9dcb8df9691a516b16593506272f0c97c9ce6a/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/28f42757a7d5221914d45e2b1a9e839ce47eb3464260097caa060f478f9c5463/json HTTP/1.1" 200 None Executing query SELECT * FROM test.graphite on instance http://localhost:None "GET /v1.46/containers/3361586f936bd1835176846c2a9dcb8df9691a516b16593506272f0c97c9ce6a/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS dist on n2 http://localhost:None "GET /v1.46/containers/28f42757a7d5221914d45e2b1a9e839ce47eb3464260097caa060f478f9c5463/json HTTP/1.1" 200 None Executing query BACKUP TABLE encrypted_test TO File('/backups/backup4/') SETTINGS decrypt_files_from_encrypted_disks=0 on node http://localhost:None "GET /v1.46/containers/3361586f936bd1835176846c2a9dcb8df9691a516b16593506272f0c97c9ce6a/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/28f42757a7d5221914d45e2b1a9e839ce47eb3464260097caa060f478f9c5463/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/3361586f936bd1835176846c2a9dcb8df9691a516b16593506272f0c97c9ce6a/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/28f42757a7d5221914d45e2b1a9e839ce47eb3464260097caa060f478f9c5463/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/3361586f936bd1835176846c2a9dcb8df9691a516b16593506272f0c97c9ce6a/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/28f42757a7d5221914d45e2b1a9e839ce47eb3464260097caa060f478f9c5463/json HTTP/1.1" 200 None Stderr:time="2025-04-02T02:27:09Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestjbodha-gw4_default Creating Stderr: Network roottestjbodha-gw4_default Created Stderr: Container roottestjbodha-gw4-zoo2-1 Creating Stderr: Container roottestjbodha-gw4-zoo3-1 Creating Stderr: Container roottestjbodha-gw4-zoo1-1 Creating Stderr: Container roottestjbodha-gw4-zoo2-1 Created Stderr: Container roottestjbodha-gw4-zoo3-1 Created Stderr: Container roottestjbodha-gw4-zoo1-1 Created Stderr: Container roottestjbodha-gw4-zoo1-1 Starting Stderr: Container roottestjbodha-gw4-zoo2-1 Starting Stderr: Container roottestjbodha-gw4-zoo3-1 Starting Stderr: Container roottestjbodha-gw4-zoo2-1 Started Stderr: Container roottestjbodha-gw4-zoo1-1 Started Stderr: Container roottestjbodha-gw4-zoo3-1 Started Stderr:time="2025-04-02T02:27:10Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T02:27:10Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestjbodha-gw4-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.7.4, port:2181, use_ssl:False Connecting to 172.16.7.4(172.16.7.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query DROP TABLE IF EXISTS data on n3 Executing query DROP TABLE encrypted_test SYNC on node Executing query DROP TABLE test.graphite on instance [gw5] PASSED test_graphite_merge_tree/test.py::test_rollup_versions http://localhost:None "GET /v1.46/containers/3361586f936bd1835176846c2a9dcb8df9691a516b16593506272f0c97c9ce6a/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/28f42757a7d5221914d45e2b1a9e839ce47eb3464260097caa060f478f9c5463/json HTTP/1.1" 200 None Connecting to 172.16.7.4(172.16.7.4):2181, use_ssl: False Connection 
dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/3361586f936bd1835176846c2a9dcb8df9691a516b16593506272f0c97c9ce6a/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/28f42757a7d5221914d45e2b1a9e839ce47eb3464260097caa060f478f9c5463/json HTTP/1.1" 200 None Connecting to 172.16.7.4(172.16.7.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/3361586f936bd1835176846c2a9dcb8df9691a516b16593506272f0c97c9ce6a/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/28f42757a7d5221914d45e2b1a9e839ce47eb3464260097caa060f478f9c5463/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS test.graphite; CREATE TABLE test.graphite (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=8192; on instance test_graphite_merge_tree/test.py::test_system_graphite_retentions Executing query DROP TABLE IF EXISTS dist on n3 Stderr:time="2025-04-02T02:27:09Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestfetchpartitionfromauxiliaryzookeeper-gw9_default Creating Stderr: Network roottestfetchpartitionfromauxiliaryzookeeper-gw9_default Created Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo3-1 Creating Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo1-1 Creating Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo2-1 Creating Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo1-1 Created Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo2-1 Created Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo3-1 Created Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo3-1 Starting Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo1-1 Starting Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo2-1 Starting Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo1-1 Started Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo2-1 Started Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo3-1 Started Stderr:time="2025-04-02T02:27:10Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T02:27:10Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.9.2, port:2181, use_ssl:False Connecting to 172.16.9.2(172.16.9.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='encrypted_policy' on node Connecting to 172.16.7.4(172.16.7.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/3361586f936bd1835176846c2a9dcb8df9691a516b16593506272f0c97c9ce6a/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/28f42757a7d5221914d45e2b1a9e839ce47eb3464260097caa060f478f9c5463/json HTTP/1.1" 200 None Connecting to 172.16.9.2(172.16.9.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused 
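[Editor's note] The repeated "Connecting to …:2181 … Connection refused" lines are expected: the Keeper containers only accept connections once startup finishes, so the framework keeps retrying the kazoo handshake. A sketch of such a wait, assuming the kazoo client the log is using; the retry policy shown is illustrative, not the framework's exact one:

    import time

    from kazoo.client import KazooClient

    def wait_for_zookeeper(ip, port=2181, timeout=60):
        deadline = time.time() + timeout
        while time.time() < deadline:
            zk = KazooClient(hosts=f"{ip}:{port}")
            try:
                zk.start(timeout=5)  # raises if the server is not up yet
                return zk            # caller is responsible for zk.stop()
            except Exception:
                zk.close()
                time.sleep(1)
        raise TimeoutError(f"ZooKeeper at {ip}:{port} did not come up in {timeout}s")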
Executing query DROP TABLE IF EXISTS data on n4 http://localhost:None "GET /v1.46/containers/3361586f936bd1835176846c2a9dcb8df9691a516b16593506272f0c97c9ce6a/json HTTP/1.1" 200 None Executing query RESTORE TABLE encrypted_test FROM File('/backups/backup4/') SETTINGS allow_different_table_def=1 on node http://localhost:None "GET /v1.46/containers/28f42757a7d5221914d45e2b1a9e839ce47eb3464260097caa060f478f9c5463/json HTTP/1.1" 200 None ClickHouse instance started Executing query SELECT value FROM system.build_options WHERE name = 'CXX_FLAGS' on instance Executing query SELECT * from system.graphite_retentions on instance Connecting to 172.16.7.4(172.16.7.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.9.2(172.16.9.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/3361586f936bd1835176846c2a9dcb8df9691a516b16593506272f0c97c9ce6a/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS dist on n4 http://localhost:None "GET /v1.46/containers/3361586f936bd1835176846c2a9dcb8df9691a516b16593506272f0c97c9ce6a/json HTTP/1.1" 200 None Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query CREATE TABLE null (row String) ENGINE=Null on instance Executing query DROP TABLE IF EXISTS test.graphite2; CREATE TABLE test.graphite2 (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=8192; on instance http://localhost:None "GET /v1.46/containers/3361586f936bd1835176846c2a9dcb8df9691a516b16593506272f0c97c9ce6a/json HTTP/1.1" 200 None Connecting to 172.16.9.2(172.16.9.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n1 run container_id:roottestinputformatparallelparsingmemorytracking-gw7-instance-1 detach:False nothrow:False cmd: ['bash', '-c', 'clickhouse local -q "SELECT arrayStringConcat(arrayMap(x->toString(cityHash64(x)), range(1000)), \' \') from numbers(10000)" > data.jsonl'] Command:[docker exec roottestinputformatparallelparsingmemorytracking-gw7-instance-1 bash -c clickhouse local -q "SELECT arrayStringConcat(arrayMap(x->toString(cityHash64(x)), range(1000)), ' ') from numbers(10000)" > data.jsonl] http://localhost:None "GET /v1.46/containers/3361586f936bd1835176846c2a9dcb8df9691a516b16593506272f0c97c9ce6a/json HTTP/1.1" 200 None ClickHouse instance started Executing query CREATE TABLE xxx (a Date) ENGINE = MergeTree(a, a, 256) on instance via HTTP interface Starting new HTTP connection (1): 172.16.3.2:8123 Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_backup_restore[File-s3_encrypted_default_path-encrypted_policy-False] Executing query SELECT config_name, Tables.database, Tables.table FROM system.graphite_retentions on instance Connecting to 172.16.7.4(172.16.7.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://172.16.3.2:8123 "GET /?query=CREATE+TABLE+xxx+%28a+Date%29+ENGINE+%3D+MergeTree%28a%2C+a%2C+256%29 HTTP/1.1" 500 None Executing query CREATE TABLE xxx (a Date) ENGINE = MergeTree(a, a, 256) on instance via HTTP interface Starting new HTTP connection (1): 172.16.3.2:8123 http://172.16.3.2:8123 "GET 
/?readonly=0&query=CREATE+TABLE+xxx+%28a+Date%29+ENGINE+%3D+MergeTree%28a%2C+a%2C+256%29 HTTP/1.1" 500 None Command:[docker compose --env-file /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/.env --project-name roottesthttpandreadonly-gw6 --file /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/instance/docker-compose.yml stop --timeout 20] [gw6] PASSED test_http_and_readonly/test.py::test_http_get_is_readonly Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n1 test_encrypted_disk/test.py::test_backup_restore[S3-encrypted_policy-encrypted_policy-False] Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='encrypted_policy' on node Executing query DROP TABLE test.graphite on instance [gw5] PASSED test_graphite_merge_tree/test.py::test_system_graphite_retentions Connecting to 172.16.9.2(172.16.9.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n1 Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node test_graphite_merge_tree/test.py::test_wrong_rollup_config Executing query DROP TABLE IF EXISTS test.graphite; CREATE TABLE test.graphite (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=8192; on instance Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n2 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query CREATE TABLE test.graphite_not_created (metric String, value Float64, timestamp UInt32, date Date, updated UInt32) ENGINE = GraphiteMergeTree('graphite_rollup_wrong_age_precision') PARTITION BY toYYYYMM(date) ORDER BY (metric, timestamp) SETTINGS index_granularity=1; on instance Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n2 Executing query BACKUP TABLE encrypted_test TO S3('http://minio1:9001/root/backups/backup5', 'minio', 'minio123') SETTINGS decrypt_files_from_encrypted_disks=0 on node Connecting to 172.16.7.4(172.16.7.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query DROP TABLE test.graphite on instance [gw5] PASSED test_graphite_merge_tree/test.py::test_wrong_rollup_config Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n2 Connecting to 172.16.9.2(172.16.9.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query DROP TABLE encrypted_test SYNC on node Stderr: zoo3 Skipped - Image is already being pulled by zoo2 Stderr: node1 Skipped - Image is already being pulled by node2 Stderr: zoo1 Skipped - Image is already being pulled by zoo2 Stderr: node2 Pulling Stderr: zoo2 Pulling Stderr: node2 Pulled Stderr: 69692152171a Pulling fs layer Stderr: ce2b89b60818 Pulling fs layer Stderr: 6584437267ff Pulling fs layer Stderr: b6500b56ee97 Pulling fs layer Stderr: e0bc610d4c8b Pulling fs layer Stderr: 160d95f4d581 Pulling fs layer Stderr: 76326c39ca4a Pulling fs layer Stderr: 95897a729488 Pulling fs layer Stderr: b6500b56ee97 Waiting Stderr: e0bc610d4c8b Waiting Stderr: 160d95f4d581 Waiting Stderr: 76326c39ca4a Waiting Stderr: 
95897a729488 Waiting Stderr: 6584437267ff Downloading [==================================================>] 211B/211B Stderr: 6584437267ff Verifying Checksum Stderr: 6584437267ff Download complete Stderr: ce2b89b60818 Downloading [> ] 42.74kB/3.269MB Stderr: 69692152171a Downloading [> ] 303.8kB/27.15MB Stderr: ce2b89b60818 Verifying Checksum Stderr: ce2b89b60818 Download complete Stderr: e0bc610d4c8b Downloading [==================================================>] 1.875kB/1.875kB Stderr: e0bc610d4c8b Verifying Checksum Stderr: e0bc610d4c8b Download complete Stderr: b6500b56ee97 Downloading [> ] 497.4kB/47.08MB Stderr: 160d95f4d581 Downloading [> ] 77.49kB/5.385MB Stderr: 160d95f4d581 Verifying Checksum Stderr: 160d95f4d581 Download complete Stderr: 76326c39ca4a Downloading [> ] 141.2kB/12.43MB Stderr: 69692152171a Downloading [===============================> ] 17.06MB/27.15MB Stderr: b6500b56ee97 Downloading [==================> ] 17.16MB/47.08MB Stderr: 69692152171a Verifying Checksum Stderr: 69692152171a Download complete Stderr: 95897a729488 Downloading [==================================================>] 776B/776B Stderr: 95897a729488 Verifying Checksum Stderr: 95897a729488 Download complete Stderr: 69692152171a Extracting [> ] 294.9kB/27.15MB Stderr: 76326c39ca4a Verifying Checksum Stderr: 76326c39ca4a Download complete Stderr: b6500b56ee97 Downloading [======================================> ] 35.84MB/47.08MB Stderr: 69692152171a Extracting [====> ] 2.654MB/27.15MB Stderr: b6500b56ee97 Verifying Checksum Stderr: b6500b56ee97 Download complete Stderr: 69692152171a Extracting [===============> ] 8.258MB/27.15MB Stderr: 69692152171a Extracting [========================> ] 13.27MB/27.15MB Stderr: 69692152171a Extracting [==============================> ] 16.52MB/27.15MB Stderr: 69692152171a Extracting [=================================> ] 18.28MB/27.15MB Stderr: 69692152171a Extracting [=============================================> ] 24.48MB/27.15MB Stderr: 69692152171a Extracting [=============================================> ] 24.77MB/27.15MB Stderr: 69692152171a Extracting [===============================================> ] 25.66MB/27.15MB Stderr: 69692152171a Extracting [================================================> ] 26.25MB/27.15MB Stderr: 69692152171a Extracting [=================================================> ] 26.84MB/27.15MB Stderr: 69692152171a Extracting [==================================================>] 27.15MB/27.15MB Stderr: 69692152171a Pull complete Stderr: ce2b89b60818 Extracting [> ] 32.77kB/3.269MB Stderr: ce2b89b60818 Extracting [============================================> ] 2.884MB/3.269MB Stderr: ce2b89b60818 Extracting [==================================================>] 3.269MB/3.269MB Stderr: ce2b89b60818 Pull complete Stderr: 6584437267ff Extracting [==================================================>] 211B/211B Stderr: 6584437267ff Extracting [==================================================>] 211B/211B Stderr: 6584437267ff Pull complete Stderr: b6500b56ee97 Extracting [> ] 491.5kB/47.08MB Stderr: b6500b56ee97 Extracting [========> ] 7.864MB/47.08MB Stderr: b6500b56ee97 Extracting [=================> ] 16.71MB/47.08MB Stderr: b6500b56ee97 Extracting [=========================> ] 23.59MB/47.08MB Stderr: b6500b56ee97 Extracting [===============================> ] 29.98MB/47.08MB Stderr: b6500b56ee97 Extracting [=======================================> ] 36.86MB/47.08MB Stderr: b6500b56ee97 Extracting 
[===============================================> ] 44.73MB/47.08MB Stderr: b6500b56ee97 Extracting [==================================================>] 47.08MB/47.08MB Stderr: b6500b56ee97 Pull complete Stderr: e0bc610d4c8b Extracting [==================================================>] 1.875kB/1.875kB Stderr: e0bc610d4c8b Extracting [==================================================>] 1.875kB/1.875kB Stderr: e0bc610d4c8b Pull complete Stderr: 160d95f4d581 Extracting [> ] 65.54kB/5.385MB Stderr: 160d95f4d581 Extracting [=================================================> ] 5.308MB/5.385MB Stderr: 160d95f4d581 Extracting [==================================================>] 5.385MB/5.385MB Stderr: 160d95f4d581 Pull complete Stderr: 76326c39ca4a Extracting [> ] 131.1kB/12.43MB Stderr: 76326c39ca4a Extracting [=> ] 393.2kB/12.43MB Stderr: 76326c39ca4a Extracting [===> ] 786.4kB/12.43MB Stderr: 76326c39ca4a Extracting [====> ] 1.18MB/12.43MB Stderr: 76326c39ca4a Extracting [========> ] 2.228MB/12.43MB Stderr: 76326c39ca4a Extracting [==================================================>] 12.43MB/12.43MB Stderr: 76326c39ca4a Pull complete Stderr: 95897a729488 Extracting [==================================================>] 776B/776B Stderr: 95897a729488 Extracting [==================================================>] 776B/776B Stderr: 95897a729488 Pull complete Stderr: zoo2 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/zk1/data', '/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/zk1/log', '/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/zk2/data', '/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/zk2/log', '/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/zk3/data', '/ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/zk3/log'] Command:[docker compose --project-name roottestdropreplicawithauxiliaryzookeepers-gw8 --env-file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_zookeeper.yml --verbose up -d] Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n3 Command:[docker compose --env-file /ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/.env --project-name roottestgraphitemergetree-gw5 --file /ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/instance/docker-compose.yml stop --timeout 20] Executing query RESTORE TABLE encrypted_test FROM S3('http://minio1:9001/root/backups/backup5', 'minio', 'minio123') SETTINGS allow_different_table_def=0 on node Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n3 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n3 run container_id:roottestinputformatparallelparsingmemorytracking-gw7-instance-1 detach:False nothrow:False cmd: ['curl', '--silent', '--show-error', '--data-binary', '@data.json', 'http://127.1:8123/?query=INSERT%20INTO%20null%20FORMAT%20JSONEachRow'] Command:[docker exec roottestinputformatparallelparsingmemorytracking-gw7-instance-1 
Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n4
Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node
[gw1] PASSED test_encrypted_disk/test.py::test_backup_restore[S3-encrypted_policy-encrypted_policy-False]
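The repeated curl invocations above all drive the same operation: posting a JSONEachRow payload at the ClickHouse HTTP port so the parallel-parsing memory accounting is exercised. A minimal Python sketch of one such call, assuming a reachable server on 127.0.0.1:8123 and a local data.json payload (both stand-ins for the container-internal values in the log):

    import urllib.request

    # Hedged sketch: equivalent of `curl --data-binary @data.json
    # 'http://127.1:8123/?query=INSERT INTO null FORMAT JSONEachRow'`.
    # Host, port, and payload path are assumptions, not taken from the harness.
    url = "http://127.0.0.1:8123/?query=INSERT%20INTO%20null%20FORMAT%20JSONEachRow"
    with open("data.json", "rb") as payload:
        req = urllib.request.Request(url, data=payload.read(), method="POST")
    urllib.request.urlopen(req).read()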
Stderr:time="2025-04-02T02:27:12Z" level=trace msg="Docker Desktop integration not enabled"
Stderr: Network roottestdropreplicawithauxiliaryzookeepers-gw8_default Creating
Stderr: Network roottestdropreplicawithauxiliaryzookeepers-gw8_default Created
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo1-1 Creating
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo3-1 Creating
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo2-1 Creating
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo2-1 Created
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo3-1 Created
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo1-1 Created
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo2-1 Starting
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo1-1 Starting
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo3-1 Starting
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo1-1 Started
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo2-1 Started
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo3-1 Started
Stderr:time="2025-04-02T02:27:13Z" level=debug msg="otel error" error=""
Wait ZooKeeper to start
get_instance_ip instance_name=zoo1
http://localhost:None "GET /v1.46/containers/roottestdropreplicawithauxiliaryzookeepers-gw8-zoo1-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo1, ip:172.16.10.3, port:2181, use_ssl:False
Connecting to 172.16.10.3(172.16.10.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Connecting to 172.16.10.3(172.16.10.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Stderr: Container roottestfilecluster-gw2-s0_0_0-1 Stopping
Stderr: Container roottestfilecluster-gw2-s0_0_1-1 Stopping
Stderr: Container roottestfilecluster-gw2-s0_1_0-1 Stopping
Stderr: Container roottestfilecluster-gw2-s0_0_0-1 Stopped
Stderr: Container roottestfilecluster-gw2-s0_1_0-1 Stopped
Stderr: Container roottestfilecluster-gw2-s0_0_1-1 Stopped
Stderr: Container roottestfilecluster-gw2-zoo2-1 Stopping
Stderr: Container roottestfilecluster-gw2-zoo3-1 Stopping
Stderr: Container roottestfilecluster-gw2-zoo1-1 Stopping
Stderr: Container roottestfilecluster-gw2-zoo3-1 Stopped
Stderr: Container roottestfilecluster-gw2-zoo1-1 Stopped
Stderr: Container roottestfilecluster-gw2-zoo2-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_0/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_0/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_1_0/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_1_0/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/.env --project-name roottestfilecluster-gw2 --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_file_cluster/_instances-0-gw2/s0_1_0/docker-compose.yml down --volumes]
Connecting to 172.16.10.3(172.16.10.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n4
Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='encrypted_policy' on node
test_encrypted_disk/test.py::test_backup_restore[S3-encrypted_policy-s3_encrypted_default_path-False]
Executing query SELECT value FROM system.events WHERE event='HedgedRequestsChangeReplica' on node
Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n4
Connecting to 172.16.9.2(172.16.9.2):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Connecting to 172.16.10.3(172.16.10.3):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Executing query SELECT value FROM system.build_options WHERE name = 'CXX_FLAGS' on node
[gw3] PASSED test_hedged_requests_parallel/test.py::test_combination2
test_hedged_requests_parallel/test.py::test_query_with_no_data_to_sample
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=zoo2
http://localhost:None "GET /v1.46/containers/roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo2-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo2, ip:172.16.9.3, port:2181, use_ssl:False
Connecting to 172.16.9.3(172.16.9.3):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Command:[docker compose --env-file /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/.env --project-name roottestinputformatparallelparsingmemorytracking-gw7 --file /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/instance/docker-compose.yml stop --timeout 20]
[gw7] PASSED test_input_format_parallel_parsing_memory_tracking/test.py::test_memory_tracking_total
Executing query INSERT INTO dist SELECT number, randomPrintableASCII(100) FROM numbers(10000) on n1
Connection dropped: socket connection broken
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=zoo3
http://localhost:None "GET /v1.46/containers/roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo3-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo3, ip:172.16.9.4, port:2181, use_ssl:False
Connecting to 172.16.9.4(172.16.9.4):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
run container_id:roottesthedgedrequestsparallel-gw3-node_1-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n \n \n 0\n 30000\n \n \n' > /etc/clickhouse-server/users.d/users1.xml"]
Command:[docker exec roottesthedgedrequestsparallel-gw3-node_1-1 bash -c echo ' 0 30000 ' > /etc/clickhouse-server/users.d/users1.xml]
run container_id:roottesthedgedrequestsparallel-gw3-node_2-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n \n \n 0\n 30000\n \n \n' > /etc/clickhouse-server/users.d/users1.xml"]
Command:[docker exec roottesthedgedrequestsparallel-gw3-node_2-1 bash -c echo ' 0 30000 ' > /etc/clickhouse-server/users.d/users1.xml]
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3')
('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/.env --project-name roottestfetchpartitionfromauxiliaryzookeeper-gw9 --file /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate')
Command:[docker compose --env-file /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/.env --project-name roottestfetchpartitionfromauxiliaryzookeeper-gw9 --file /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate]
run container_id:roottesthedgedrequestsparallel-gw3-node_3-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n \n \n 0\n 0\n \n \n' > /etc/clickhouse-server/users.d/users1.xml"]
Command:[docker exec roottesthedgedrequestsparallel-gw3-node_3-1 bash -c echo ' 0 0 ' > /etc/clickhouse-server/users.d/users1.xml]
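The Connect/GetChildren/Close cycles in these lines are the harness's ZooKeeper readiness probe: open a session, list the root (an up Keeper answers ['keeper']), and close. A rough kazoo equivalent, with the address taken from one of the container IPs in the log and the timeout as an illustrative assumption:

    from kazoo.client import KazooClient

    # Hedged sketch of the probe seen above, not the harness's exact helper.
    zk = KazooClient(hosts="172.16.9.4:2181")
    zk.start(timeout=30)          # raises while the node still refuses connections
    print(zk.get_children("/"))   # e.g. ['keeper'] once ClickHouse Keeper is serving
    zk.stop()
    zk.close()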
Stderr: Container roottestfilecluster-gw2-s0_0_0-1 Stopping
Stderr: Container roottestfilecluster-gw2-s0_0_1-1 Stopping
Stderr: Container roottestfilecluster-gw2-s0_1_0-1 Stopping
Stderr: Container roottestfilecluster-gw2-s0_0_0-1 Stopped
Stderr: Container roottestfilecluster-gw2-s0_0_0-1 Removing
Stderr: Container roottestfilecluster-gw2-s0_0_1-1 Stopped
Stderr: Container roottestfilecluster-gw2-s0_0_1-1 Removing
Stderr: Container roottestfilecluster-gw2-s0_1_0-1 Stopped
Stderr: Container roottestfilecluster-gw2-s0_1_0-1 Removing
Stderr: Container roottestfilecluster-gw2-s0_0_0-1 Removed
Stderr: Container roottestfilecluster-gw2-s0_1_0-1 Removed
Stderr: Container roottestfilecluster-gw2-s0_0_1-1 Removed
Stderr: Container roottestfilecluster-gw2-zoo1-1 Stopping
Stderr: Container roottestfilecluster-gw2-zoo2-1 Stopping
Stderr: Container roottestfilecluster-gw2-zoo3-1 Stopping
Stderr: Container roottestfilecluster-gw2-zoo3-1 Stopped
Stderr: Container roottestfilecluster-gw2-zoo3-1 Removing
Stderr: Container roottestfilecluster-gw2-zoo2-1 Stopped
Stderr: Container roottestfilecluster-gw2-zoo2-1 Removing
Stderr: Container roottestfilecluster-gw2-zoo1-1 Stopped
Stderr: Container roottestfilecluster-gw2-zoo1-1 Removing
Stderr: Container roottestfilecluster-gw2-zoo3-1 Removed
Stderr: Container roottestfilecluster-gw2-zoo1-1 Removed
Stderr: Container roottestfilecluster-gw2-zoo2-1 Removed
Stderr: Network roottestfilecluster-gw2_default Removing
Stderr: Network roottestfilecluster-gw2_default Removed
Cleanup called
run container_id:roottesthedgedrequestsparallel-gw3-node_4-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n \n \n 0\n 0\n \n \n' > /etc/clickhouse-server/users.d/users1.xml"]
Command:[docker exec roottesthedgedrequestsparallel-gw3-node_4-1 bash -c echo ' 0 0 ' > /etc/clickhouse-server/users.d/users1.xml]
Docker networks for project roottestfilecluster-gw2 are NETWORK ID NAME DRIVER SCOPE
run container_id:roottestinsertdistributedasyncsend-gw0-n1-1 detach:False nothrow:False cmd: ['bash', '-c', 'wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin']
Command:[docker exec roottestinsertdistributedasyncsend-gw0-n1-1 bash -c wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin]
Docker containers for project roottestfilecluster-gw2 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestfilecluster-gw2 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestfilecluster-gw2-.*-1$' --format '{{.ID}}:{{.Names}}']
Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface
Starting new HTTP connection (1): 172.16.4.3:8123
Unstopped containers: {}
No running containers for project: roottestfilecluster-gw2
Trying to prune unused networks...
http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None
Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface
Starting new HTTP connection (1): 172.16.4.3:8123
Stdout:1054688
run container_id:roottestinsertdistributedasyncsend-gw0-n1-1 detach:False nothrow:False cmd: ['bash', '-c', 'mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1054628 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin && head -c 60 /dev/zero >> /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin']
Command:[docker exec roottestinsertdistributedasyncsend-gw0-n1-1 bash -c mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1054628 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin && head -c 60 /dev/zero >> /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin]
Executing query BACKUP TABLE encrypted_test TO S3('http://minio1:9001/root/backups/backup6', 'minio', 'minio123') SETTINGS decrypt_files_from_encrypted_disks=0 on node
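The mv/head pipeline above corrupts a queued distributed batch in place: it keeps the first 1054628 of the file's 1054688 bytes and pads the tail with 60 zero bytes, so the file keeps its length but the tail no longer matches, and the sender later quarantines it (the broken/2.bin check further down). A Python sketch of the same in-place zeroing, with the path shortened as a stand-in for the container-internal one:

    import os

    # Hedged sketch: overwrite the last 60 bytes of the queued batch with zeros,
    # preserving the file size. The real file lives under
    # /var/lib/clickhouse/data/default/dist/shard1_replica2/ inside the container.
    path = "2.bin"
    tail = 60
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.seek(size - tail)
        f.write(b"\x00" * tail)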
Trying to prune unused images...
Command:[docker image prune -f]
http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None
Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_2 via HTTP interface
Starting new HTTP connection (1): 172.16.4.5:8123
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None
Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_2 via HTTP interface
Executing query SYSTEM FLUSH DISTRIBUTED dist on n1
Starting new HTTP connection (1): 172.16.4.5:8123
Stdout:6
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None
Volumes pruned: 6
Connecting to 172.16.7.4(172.16.7.4):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo3-1 Running
Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo1-1 Running
Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo2-1 Running
Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-node-1 Creating
Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-node-1 Created
Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-node-1 Starting
Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-node-1 Started
ClickHouse instance created
get_instance_ip instance_name=node
http://localhost:None "GET /v1.46/containers/roottestfetchpartitionfromauxiliaryzookeeper-gw9-node-1/json HTTP/1.1" 200 None
Waiting for ClickHouse start in node, ip: 172.16.9.5...
http://localhost:None "GET /v1.46/containers/1c4894eaacb790cc228d0b5d1e07ce0200479f04fa1b6e65c6b06ad4420a489c/json HTTP/1.1" 200 None
Executing query DROP TABLE encrypted_test SYNC on node
Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_2 via HTTP interface
Starting new HTTP connection (1): 172.16.4.5:8123
http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=zoo2
Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_2 via HTTP interface
http://localhost:None "GET /v1.46/containers/roottestjbodha-gw4-zoo2-1/json HTTP/1.1" 200 None
Starting new HTTP connection (1): 172.16.4.5:8123
get_kazoo_client: zoo2, ip:172.16.7.2, port:2181, use_ssl:False
Connecting to 172.16.7.2(172.16.7.2):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None
Executing query SYSTEM FLUSH DISTRIBUTED dist on n1
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=zoo3
http://localhost:None "GET /v1.46/containers/roottestjbodha-gw4-zoo3-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo3, ip:172.16.7.3, port:2181, use_ssl:False
Connecting to 172.16.7.3(172.16.7.3):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_2 via HTTP interface
Starting new HTTP connection (1): 172.16.4.5:8123
http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None
Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_2 via HTTP interface
Starting new HTTP connection (1): 172.16.4.5:8123
http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None
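The alternating SELECTs against system.settings here are a wait loop: after rewriting users1.xml, the harness polls each node over HTTP until the new sleep_in_send_tables_status_ms / sleep_in_send_data_ms values are visible. A sketch of such a loop; the target value, timeout behaviour, and host binding are assumptions, only the query shape is taken from the log:

    import time
    import urllib.parse
    import urllib.request

    def read_setting(host: str, name: str) -> str:
        # Same query shape as in the log, sent over the HTTP interface.
        sql = f"SELECT value FROM system.settings WHERE name='{name}'"
        url = f"http://{host}:8123/?query={urllib.parse.quote(sql)}"
        return urllib.request.urlopen(url).read().decode().strip()

    # Hedged sketch: expected value "30000" is illustrative, not asserted by the log.
    while read_setting("172.16.4.5", "sleep_in_send_data_ms") != "30000":
        time.sleep(0.5)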
Connecting to 172.16.10.3(172.16.10.3):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3')
('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/.env --project-name roottestjbodha-gw4 --file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node2/docker-compose.yml up -d --no-recreate')
Command:[docker compose --env-file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/.env --project-name roottestjbodha-gw4 --file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node2/docker-compose.yml up -d --no-recreate]
Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='s3_encrypted_default_path' on node
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['zookeeper']
Sending request(xid=2): Close()
Closing connection to 172.16.10.3:2181
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=zoo2
http://localhost:None "GET /v1.46/containers/roottestdropreplicawithauxiliaryzookeepers-gw8-zoo2-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo2, ip:172.16.10.4, port:2181, use_ssl:False
Connecting to 172.16.10.4(172.16.10.4):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
http://localhost:None "GET /v1.46/containers/1c4894eaacb790cc228d0b5d1e07ce0200479f04fa1b6e65c6b06ad4420a489c/json HTTP/1.1" 200 None
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['zookeeper']
Sending request(xid=2): Close()
Closing connection to 172.16.10.4:2181
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=zoo3
http://localhost:None "GET /v1.46/containers/roottestdropreplicawithauxiliaryzookeepers-gw8-zoo3-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo3, ip:172.16.10.2, port:2181, use_ssl:False
Connecting to 172.16.10.2(172.16.10.2):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_2 via HTTP interface
Starting new HTTP connection (1): 172.16.4.5:8123
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None
Received response(xid=1): ['zookeeper']
Sending request(xid=2): Close()
Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_2 via HTTP interface
Starting new HTTP connection (1): 172.16.4.5:8123
Closing connection to 172.16.10.2:2181
Zookeeper session closed, state: CLOSED
All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3')
('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/.env --project-name roottestdropreplicawithauxiliaryzookeepers-gw8 --file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_zookeeper.yml --file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node2/docker-compose.yml up -d --no-recreate')
Command:[docker compose --env-file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/.env --project-name roottestdropreplicawithauxiliaryzookeepers-gw8 --file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_zookeeper.yml --file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node2/docker-compose.yml up -d --no-recreate]
http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None
Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_3 via HTTP interface
Starting new HTTP connection (1): 172.16.4.4:8123
http://172.16.4.4:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None
Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_3 via HTTP interface
Starting new HTTP connection (1): 172.16.4.4:8123
http://localhost:None "GET /v1.46/containers/1c4894eaacb790cc228d0b5d1e07ce0200479f04fa1b6e65c6b06ad4420a489c/json HTTP/1.1" 200 None
run container_id:roottestinsertdistributedasyncsend-gw0-n1-1 detach:False nothrow:False cmd: ['bash', '-c', 'ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin']
Command:[docker exec roottestinsertdistributedasyncsend-gw0-n1-1 bash -c ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin]
http://172.16.4.4:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None
Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_4 via HTTP interface
Starting new HTTP connection (1): 172.16.4.2:8123
http://172.16.4.2:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None
Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_4 via HTTP interface
Starting new HTTP connection (1): 172.16.4.2:8123
http://172.16.4.2:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None
run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse']
Command:[docker exec -u root roottesthedgedrequestsparallel-gw3-node-1 bash -c ps -C clickhouse]
Stdout:/var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin
Executing query SELECT count() FROM data on n1
Stdout: PID TTY TIME CMD
Stdout: 1511 ? 00:00:02 clickhouse
run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse']
Command:[docker exec -u root roottesthedgedrequestsparallel-gw3-node-1 bash -c pkill clickhouse]
http://localhost:None "GET /v1.46/containers/1c4894eaacb790cc228d0b5d1e07ce0200479f04fa1b6e65c6b06ad4420a489c/json HTTP/1.1" 200 None
run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stderr: Container roottestgraphitemergetree-gw5-instance-1 Stopping
Stderr: Container roottestgraphitemergetree-gw5-instance-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/instance/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/instance/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/.env --project-name roottestgraphitemergetree-gw5 --file /ClickHouse/tests/integration/test_graphite_merge_tree/_instances-0-gw5/instance/docker-compose.yml down --volumes]
Stdout:1511
Stderr: Container roottestinputformatparallelparsingmemorytracking-gw7-instance-1 Stopping
Stderr: Container roottestinputformatparallelparsingmemorytracking-gw7-instance-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/instance/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/instance/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/.env --project-name roottestinputformatparallelparsingmemorytracking-gw7 --file /ClickHouse/tests/integration/test_input_format_parallel_parsing_memory_tracking/_instances-0-gw7/instance/docker-compose.yml down --volumes]
Executing query RESTORE TABLE encrypted_test FROM S3('http://minio1:9001/root/backups/backup6', 'minio', 'minio123') SETTINGS allow_different_table_def=1 on node
Executing query SELECT count() FROM data on n2
Stderr: Container roottesthttpandreadonly-gw6-instance-1 Stopping
Stderr: Container roottesthttpandreadonly-gw6-instance-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/instance/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/instance/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/.env --project-name roottesthttpandreadonly-gw6 --file /ClickHouse/tests/integration/test_http_and_readonly/_instances-0-gw6/instance/docker-compose.yml down --volumes]
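backup6 above round-trips an encrypted-disk table through S3-compatible storage: BACKUP with decrypt_files_from_encrypted_disks=0 keeps the files encrypted at rest in MinIO, and the RESTORE with allow_different_table_def=1 permits the restored table definition to differ. A sketch of the same pair replayed over the HTTP interface; the SQL mirrors the log, while the host and the helper are assumptions:

    import urllib.parse
    import urllib.request

    def run(sql: str, host: str = "127.0.0.1") -> str:
        # Hedged helper, not the harness's own; sends one statement over HTTP.
        url = f"http://{host}:8123/?query={urllib.parse.quote(sql)}"
        return urllib.request.urlopen(url).read().decode()

    run("BACKUP TABLE encrypted_test TO S3('http://minio1:9001/root/backups/backup6', 'minio', 'minio123') "
        "SETTINGS decrypt_files_from_encrypted_disks=0")
    run("RESTORE TABLE encrypted_test FROM S3('http://minio1:9001/root/backups/backup6', 'minio', 'minio123') "
        "SETTINGS allow_different_table_def=1")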
http://localhost:None "GET /v1.46/containers/1c4894eaacb790cc228d0b5d1e07ce0200479f04fa1b6e65c6b06ad4420a489c/json HTTP/1.1" 200 None
[gw0] PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_small[1]
test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_different_header[0]
Executing query DROP TABLE IF EXISTS data on n1
Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node
Stderr: Container roottestgraphitemergetree-gw5-instance-1 Stopping
Stderr: Container roottestgraphitemergetree-gw5-instance-1 Stopped
Stderr: Container roottestgraphitemergetree-gw5-instance-1 Removing
Stderr: Container roottestgraphitemergetree-gw5-instance-1 Removed
Stderr: Network roottestgraphitemergetree-gw5_default Removing
Stderr: Network roottestgraphitemergetree-gw5_default Removed
Cleanup called
Docker networks for project roottestgraphitemergetree-gw5 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestgraphitemergetree-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ClickHouse node started
Executing query CREATE TABLE IF NOT EXISTS simple (date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/0/simple', 'node') ORDER BY tuple() PARTITION BY date; on node
Docker volumes for project roottestgraphitemergetree-gw5 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestgraphitemergetree-gw5-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottestgraphitemergetree-gw5
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:6
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
Volumes pruned: 6
Stderr: Container roottestjbodha-gw4-zoo3-1 Running
Stderr: Container roottestjbodha-gw4-zoo1-1 Running
Stderr: Container roottestjbodha-gw4-zoo2-1 Running
Stderr: Container roottestjbodha-gw4-node1-1 Creating
Stderr: Container roottestjbodha-gw4-node2-1 Creating
Stderr: Container roottestjbodha-gw4-node1-1 Created
Stderr: Container roottestjbodha-gw4-node2-1 Created
Stderr: Container roottestjbodha-gw4-node1-1 Starting
Stderr: Container roottestjbodha-gw4-node2-1 Starting
Stderr: Container roottestjbodha-gw4-node2-1 Started
Stderr: Container roottestjbodha-gw4-node1-1 Started
ClickHouse instance created
get_instance_ip instance_name=node1
http://localhost:None "GET /v1.46/containers/roottestjbodha-gw4-node1-1/json HTTP/1.1" 200 None
Waiting for ClickHouse start in node1, ip: 172.16.7.6...
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo1-1 Running
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo2-1 Running
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo3-1 Running
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 Creating
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node1-1 Creating
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node1-1 Created
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 Created
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node1-1 Starting
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 Starting
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node1-1 Started
Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 Started
ClickHouse instance created
get_instance_ip instance_name=node1
http://localhost:None "GET /v1.46/containers/a0e4738336b88b9a5ac8171924159b671f44ac10f467c7ee66a78b4e48dec3e7/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/roottestdropreplicawithauxiliaryzookeepers-gw8-node1-1/json HTTP/1.1" 200 None
run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Waiting for ClickHouse start in node1, ip: 172.16.10.6...
Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
http://localhost:None "GET /v1.46/containers/064e1c861ee24ad6ee0f83429a5fa554c2fe73102ae1a929a2f88c317720c1dc/json HTTP/1.1" 200 None
Stdout:1511
Executing query DROP TABLE IF EXISTS dist on n1
Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node
[gw1] PASSED test_encrypted_disk/test.py::test_backup_restore[S3-encrypted_policy-s3_encrypted_default_path-False]
Executing query INSERT INTO simple VALUES ('2020-08-28', 1) on node
Executing query DROP TABLE IF EXISTS data on n2
test_encrypted_disk/test.py::test_backup_restore[S3-s3_encrypted_default_path-encrypted_policy-False]
Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='s3_encrypted_default_path' on node
Executing query CREATE TABLE IF NOT EXISTS simple2 (date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/simple', 'node') ORDER BY tuple() PARTITION BY date; on node
Executing query DROP TABLE IF EXISTS dist on n2
Stderr: Container roottestinputformatparallelparsingmemorytracking-gw7-instance-1 Stopping
Stderr: Container roottestinputformatparallelparsingmemorytracking-gw7-instance-1 Stopped
Stderr: Container roottestinputformatparallelparsingmemorytracking-gw7-instance-1 Removing
Stderr: Container roottestinputformatparallelparsingmemorytracking-gw7-instance-1 Removed
Stderr: Network roottestinputformatparallelparsingmemorytracking-gw7_default Removing
Stderr: Network roottestinputformatparallelparsingmemorytracking-gw7_default Removed
Cleanup called
Docker networks for project roottestinputformatparallelparsingmemorytracking-gw7 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestinputformatparallelparsingmemorytracking-gw7 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestinputformatparallelparsingmemorytracking-gw7 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestinputformatparallelparsingmemorytracking-gw7-.*-1$' --format '{{.ID}}:{{.Names}}']
Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node
Unstopped containers: {}
No running containers for project: roottestinputformatparallelparsingmemorytracking-gw7
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Executing query ALTER TABLE simple2 FETCH PART '20200828_0_0_0' FROM 'zookeeper2:/clickhouse/tables/0/simple'; on node
Stdout:6
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
Volumes pruned: 6
test_keeper_incorrect_config/test.py::test_invalid_configs
Running tests in /ClickHouse/tests/integration/test_keeper_incorrect_config/test.py
Cluster start called. is_up=False
Docker networks for project roottestkeeperincorrectconfig-gw7 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestkeeperincorrectconfig-gw7 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestkeeperincorrectconfig-gw7 are DRIVER VOLUME NAME
Cleanup called
Executing query DROP TABLE IF EXISTS data on n3
Command:[docker container list --all --filter name='^/roottestkeeperincorrectconfig-gw7-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottestkeeperincorrectconfig-gw7
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Executing query ALTER TABLE simple2 ATTACH PART '20200828_0_0_0'; on node
Stdout:6
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
Volumes pruned: 6
Setup directory for instance: node1
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files ['/ClickHouse/tests/integration/test_keeper_incorrect_config/configs/enable_keeper1.xml'] to /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/node1/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/node1/database
Setup logs dir /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/node1/logs
Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!"
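simple and simple2 above are replicated under /clickhouse/tables/0/simple and /clickhouse/tables/1/simple respectively, so the FETCH PART addressed as 'zookeeper2:/clickhouse/tables/0/simple' pulls the part through the auxiliary ZooKeeper before ATTACH PART makes it visible. A sketch of that sequence over the HTTP interface; the statements are the ones in the log, the host and helper are assumptions:

    import urllib.parse
    import urllib.request

    def run(sql: str, host: str = "127.0.0.1") -> str:
        # Hedged helper, not the harness's own; sends one statement over HTTP.
        url = f"http://{host}:8123/?query={urllib.parse.quote(sql)}"
        return urllib.request.urlopen(url).read().decode()

    run("ALTER TABLE simple2 FETCH PART '20200828_0_0_0' FROM 'zookeeper2:/clickhouse/tables/0/simple'")
    run("ALTER TABLE simple2 ATTACH PART '20200828_0_0_0'")
    print(run("SELECT id FROM simple2 WHERE date = '2020-08-28'"))  # the inserted row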
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/.env
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
http://localhost:None "GET /version HTTP/1.1" 200 826
Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/.env --project-name roottestkeeperincorrectconfig-gw7 --file /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/node1/docker-compose.yml pull]
Executing query DROP TABLE IF EXISTS dist on n3
Stderr: Container roottesthttpandreadonly-gw6-instance-1 Stopping
Stderr: Container roottesthttpandreadonly-gw6-instance-1 Stopped
Stderr: Container roottesthttpandreadonly-gw6-instance-1 Removing
Stderr: Container roottesthttpandreadonly-gw6-instance-1 Removed
Stderr: Network roottesthttpandreadonly-gw6_default Removing
Stderr: Network roottesthttpandreadonly-gw6_default Removed
Cleanup called
Executing query DROP TABLE IF EXISTS data on n4
run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query BACKUP TABLE encrypted_test TO S3('http://minio1:9001/root/backups/backup7', 'minio', 'minio123') SETTINGS decrypt_files_from_encrypted_disks=0 on node
Docker networks for project roottesthttpandreadonly-gw6 are NETWORK ID NAME DRIVER SCOPE
Executing query ALTER TABLE simple2 FETCH PART '20200828_0_0_0' FROM 'zookeeper:/clickhouse/tables/0/simple'; on node
Docker containers for project roottesthttpandreadonly-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottesthttpandreadonly-gw6 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottesthttpandreadonly-gw6-.*-1$' --format '{{.ID}}:{{.Names}}']
Stdout:1511
Unstopped containers: {}
No running containers for project: roottesthttpandreadonly-gw6
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l] Stdout:6 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 6 test_http_native/test.py::test_http_native_returns_timezone Running tests in /ClickHouse/tests/integration/test_http_native/test.py Cluster start called. is_up=False Docker networks for project roottesthttpnative-gw6 are NETWORK ID NAME DRIVER SCOPE Executing query DROP TABLE IF EXISTS dist on n4 Docker containers for project roottesthttpnative-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottesthttpnative-gw6 are DRIVER VOLUME NAME Cleanup called http://localhost:None "GET /v1.46/containers/a0e4738336b88b9a5ac8171924159b671f44ac10f467c7ee66a78b4e48dec3e7/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/064e1c861ee24ad6ee0f83429a5fa554c2fe73102ae1a929a2f88c317720c1dc/json HTTP/1.1" 200 None Docker networks for project roottesthttpnative-gw6 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottesthttpnative-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottesthttpnative-gw6 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottesthttpnative-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottesthttpnative-gw6 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Executing query DROP TABLE encrypted_test SYNC on node http://localhost:None "GET /v1.46/containers/a0e4738336b88b9a5ac8171924159b671f44ac10f467c7ee66a78b4e48dec3e7/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/064e1c861ee24ad6ee0f83429a5fa554c2fe73102ae1a929a2f88c317720c1dc/json HTTP/1.1" 200 None Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
Command:[docker volume ls | wc -l] Stdout:6 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 6 Setup directory for instance: instance Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/instance/configs/config.d Setup database dir /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/instance/database Setup logs dir /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/instance/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n1 Executing query SELECT id FROM simple2 where date = '2020-08-28' on node http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/.env --project-name roottesthttpnative-gw6 --file /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/instance/docker-compose.yml pull] http://localhost:None "GET /v1.46/containers/a0e4738336b88b9a5ac8171924159b671f44ac10f467c7ee66a78b4e48dec3e7/json HTTP/1.1" 200 None ClickHouse node1 started get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/064e1c861ee24ad6ee0f83429a5fa554c2fe73102ae1a929a2f88c317720c1dc/json HTTP/1.1" 200 None ClickHouse node1 started get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestjbodha-gw4-node2-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/roottestjbodha-gw4-node2-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node2 Waiting for ClickHouse start in node2, ip: 172.16.7.5... http://localhost:None "GET /v1.46/containers/roottestjbodha-gw4-node2-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node2, ip: 172.16.10.5... 
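The interleaved "GET /v1.46/containers/<id>/json" requests are the start-up poll behind the "Waiting for ClickHouse start in node2, ip: ..." lines: the helper inspects the container until it is running and has a network address. A rough sketch with the docker SDK (wait_for_clickhouse_start is an illustrative name, not the framework's):

    import time
    import docker  # docker SDK for Python

    def wait_for_clickhouse_start(container_name: str, timeout: float = 60.0) -> str:
        # Poll container state via the Docker API (GET /containers/<id>/json)
        # until the instance is running and reports an IP address.
        client = docker.from_env()
        deadline = time.time() + timeout
        while time.time() < deadline:
            c = client.containers.get(container_name)
            networks = c.attrs["NetworkSettings"]["Networks"]
            ip = next(iter(networks.values()), {}).get("IPAddress", "")
            if c.status == "running" and ip:
                return ip
            time.sleep(0.5)
        raise TimeoutError(f"{container_name} did not start within {timeout}s")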
http://localhost:None "GET /v1.46/containers/ee2d94e4b8b75996d4911b14c21f06ed3a7635e6b69d32cfb24f4ed16645e8e9/json HTTP/1.1" 200 None ClickHouse node2 started http://localhost:None "GET /v1.46/containers/roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1/json HTTP/1.1" 200 None Executing query CREATE TABLE tbl (p UInt8, d String) ENGINE = ReplicatedMergeTree('/clickhouse/tbl', '0') PARTITION BY p ORDER BY tuple() SETTINGS storage_policy = 'jbod', old_parts_lifetime = 1, cleanup_delay_period = 1, cleanup_delay_period_random_add = 2, cleanup_thread_preferred_points_per_iteration=0, max_bytes_to_merge_at_max_space_in_pool = 4096 on node1 http://localhost:None "GET /v1.46/containers/4cda0745fe63ad51c69588dea87b1812d0c1cbd2204f19fda86086fd1f91022c/json HTTP/1.1" 200 None ClickHouse node2 started Executing query DROP TABLE IF EXISTS test_auxiliary_zookeeper NO DELAY on node1 Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='encrypted_policy' on node Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_shards, currentDatabase(), data, key ) on n1 [gw9] PASSED test_fetch_partition_from_auxiliary_zookeeper/test.py::test_fetch_part_from_allowed_zookeeper[PART-2020-08-28-20200828_0_0_0] test_fetch_partition_from_auxiliary_zookeeper/test.py::test_fetch_part_from_allowed_zookeeper[PARTITION-2020-08-27-2020-08-27] Executing query CREATE TABLE IF NOT EXISTS simple (date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/0/simple', 'node') ORDER BY tuple() PARTITION BY date; on node Executing query DROP TABLE IF EXISTS test_auxiliary_zookeeper NO DELAY on node2 Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n1 Executing query CREATE TABLE tbl (p UInt8, d String) ENGINE = ReplicatedMergeTree('/clickhouse/tbl', '1') PARTITION BY p ORDER BY tuple() SETTINGS storage_policy = 'jbod', old_parts_lifetime = 1, cleanup_delay_period = 1, cleanup_delay_period_random_add = 2, cleanup_thread_preferred_points_per_iteration=0, max_bytes_to_merge_at_max_space_in_pool = 4096 on node2 Executing query RESTORE TABLE encrypted_test FROM S3('http://minio1:9001/root/backups/backup7', 'minio', 'minio123') SETTINGS allow_different_table_def=1 on node Executing query INSERT INTO simple VALUES ('2020-08-27', 1) on node Executing query CREATE TABLE test_auxiliary_zookeeper(a Int32) ENGINE = ReplicatedMergeTree('zookeeper2:/clickhouse/tables/test/test_auxiliary_zookeeper', 'node1') ORDER BY a; on node1 Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n2 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query CREATE TABLE IF NOT EXISTS simple2 (date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/simple', 'node') ORDER BY tuple() PARTITION BY date; on node Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_shards, currentDatabase(), data, key ) on n2 run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print 
$1}'] Stdout:1511 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query CREATE TABLE test_auxiliary_zookeeper(a Int32) ENGINE = ReplicatedMergeTree('zookeeper2:/clickhouse/tables/test/test_auxiliary_zookeeper', 'node2') ORDER BY a; on node2 Executing query ALTER TABLE simple2 FETCH PARTITION '2020-08-27' FROM 'zookeeper2:/clickhouse/tables/0/simple'; on node Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_backup_restore[S3-s3_encrypted_default_path-encrypted_policy-False] Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n2 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query ALTER TABLE simple2 ATTACH PARTITION '2020-08-27'; on node run container_id:roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 bash -c ps -C clickhouse] Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n3 test_encrypted_disk/test.py::test_backup_restore[S3-s3_encrypted_default_path-s3_encrypted_default_path-False] Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='s3_encrypted_default_path' on node Stdout: PID TTY TIME CMD Stdout: 8 ? 00:00:01 clickhouse run container_id:roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 bash -c pkill clickhouse] run container_id:roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:8 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query ALTER TABLE simple2 FETCH PARTITION '2020-08-27' FROM 'zookeeper:/clickhouse/tables/0/simple'; on node Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_shards, currentDatabase(), data, key ) on n3 Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node Executing query SELECT id FROM simple2 where date = '2020-08-27' on node Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n3 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Command:[docker compose --env-file /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/.env --project-name roottestfetchpartitionfromauxiliaryzookeeper-gw9 --file /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml stop --timeout 20] [gw9] PASSED 
test_fetch_partition_from_auxiliary_zookeeper/test.py::test_fetch_part_from_allowed_zookeeper[PARTITION-2020-08-27-2020-08-27] Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n4 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query BACKUP TABLE encrypted_test TO S3('http://minio1:9001/root/backups/backup8', 'minio', 'minio123') SETTINGS decrypt_files_from_encrypted_disks=0 on node run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:1511 Stdout:2272 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_shards, currentDatabase(), data, key ) on n4 Executing query DROP TABLE encrypted_test SYNC on node Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n4 run container_id:roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query RESTORE TABLE encrypted_test FROM S3('http://minio1:9001/root/backups/backup8', 'minio', 'minio123') SETTINGS allow_different_table_def=0 on node Stdout:8 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query INSERT INTO dist VALUES (0, 'f') on n2 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query ALTER TABLE dist MODIFY COLUMN value UInt64 on n2 Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_backup_restore[S3-s3_encrypted_default_path-s3_encrypted_default_path-False] Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query INSERT INTO dist VALUES (2, 1) on n2 run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. 
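The ps/pkill exchanges around here are the framework's restart sequence: enumerate clickhouse PIDs inside the container, pkill, poll until none remain ("No clickhouse process running. Start new one."), then relaunch. Sketched below under the assumption of a docker-exec wrapper (exec_in_container is a stand-in name, not the real helper):

    import subprocess
    import time

    PS_CLICKHOUSE = ("ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' "
                     "| grep -v 'bash -c' | awk '{print $1}'")

    def exec_in_container(container: str, cmd: str) -> str:
        # Thin wrapper over the Command:[docker exec ...] calls in this log.
        res = subprocess.run(["docker", "exec", container, "bash", "-c", cmd],
                             capture_output=True, text=True)
        return res.stdout

    def restart_clickhouse(container: str, start_cmd: str) -> None:
        exec_in_container(container, "pkill clickhouse")
        while exec_in_container(container, PS_CLICKHOUSE).strip():
            time.sleep(1)  # wait until the old process has exited
        exec_in_container(container, start_cmd)  # e.g. 'clickhouse server ... --daemon'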
http://localhost:None "POST /v1.46/containers/roottesthedgedrequestsparallel-gw3-node-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/79da76ffc4a39e2279555d2e8fa99d802040d394798c42a5386d09c61e9b8142/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/79da76ffc4a39e2279555d2e8fa99d802040d394798c42a5386d09c61e9b8142/json HTTP/1.1" 200 586 test_encrypted_disk/test.py::test_encrypted_disk[encrypted_policy] Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='encrypted_policy' on node Executing query ALTER TABLE data MODIFY COLUMN value UInt64 on n1 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node Executing query SYSTEM FLUSH DISTRIBUTED dist on n2 run container_id:roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Stdout:8 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query SELECT count() FROM data on n1 Executing query INSERT INTO encrypted_test VALUES (2,'data'),(3,'data') on node Executing query DROP TABLE data SYNC; CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key; on n1 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query OPTIMIZE TABLE encrypted_test FINAL on node run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SYSTEM FLUSH DISTRIBUTED dist on n2 Stdout:2309 Clickhouse process running. 
run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:2309 Executing query select 20 on node Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query SELECT count() FROM data on n1 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 run container_id:roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_encrypted_disk[encrypted_policy] Executing query SELECT count() FROM data on n2 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 test_encrypted_disk/test.py::test_encrypted_disk[encrypted_policy_key192b] Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='encrypted_policy_key192b' on node [gw0] PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_different_header[0] test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_different_header[1] Executing query DROP TABLE IF EXISTS data on n1 Executing query select 20 on node Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query DROP TABLE IF EXISTS dist on n1 Executing query SELECT * FROM distributed on node Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query DROP TABLE IF EXISTS data on n2 Executing query INSERT INTO encrypted_test VALUES (2,'data'),(3,'data') on node Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query DROP TABLE IF EXISTS dist on n2 Executing query OPTIMIZE TABLE encrypted_test FINAL on node Executing query DROP TABLE IF EXISTS data on n3 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query DROP TABLE IF EXISTS dist on n3 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query DROP TABLE IF EXISTS data on n4 Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_encrypted_disk[encrypted_policy_key192b] Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing 
query DROP TABLE IF EXISTS dist on n4 test_encrypted_disk/test.py::test_encrypted_disk[local_policy] Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='local_policy' on node Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n1 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_shards, currentDatabase(), data, key ) on n1 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n1 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query INSERT INTO encrypted_test VALUES (2,'data'),(3,'data') on node Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n2 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query OPTIMIZE TABLE encrypted_test FINAL on node Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_shards, currentDatabase(), data, key ) on n2 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n2 Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_encrypted_disk[local_policy] Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n3 Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='s3_policy' on node test_encrypted_disk/test.py::test_encrypted_disk[s3_policy] Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_shards, currentDatabase(), data, key ) on n3 Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n3 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n4 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query INSERT INTO encrypted_test VALUES (2,'data'),(3,'data') on node Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_shards, currentDatabase(), data, key ) on n4 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query OPTIMIZE TABLE encrypted_test FINAL on node Executing query SYSTEM STOP 
DISTRIBUTED SENDS dist on n4 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query INSERT INTO dist VALUES (0, 'f') on n1 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query ALTER TABLE dist MODIFY COLUMN value UInt64 on n1 Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_encrypted_disk[s3_policy] Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query INSERT INTO dist VALUES (2, 1) on n1 test_encrypted_disk/test.py::test_log_family Executing query SELECT policy_name FROM system.storage_policies on node Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query SELECT value FROM system.events WHERE event='HedgedRequestsChangeReplica' on node Executing query ALTER TABLE data MODIFY COLUMN value UInt64 on n1 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF\n<clickhouse>\n    <storage_configuration>\n        <disks>\n            <encrypted_policy_multikeys_disk>\n                <type>encrypted</type>\n                <disk>disk_local</disk>\n                <path>encrypted_policy_multikeys_dir/</path>\n                <key>firstfirstfirstf</key>\n            </encrypted_policy_multikeys_disk>\n        </disks>\n        <policies>\n            <encrypted_policy_multikeys>\n                <volumes>\n                    <main>\n                        <disk>encrypted_policy_multikeys_disk</disk>\n                    </main>\n                </volumes>\n            </encrypted_policy_multikeys>\n        </policies>\n    </storage_configuration>\n</clickhouse>\nEOF'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF <clickhouse> <storage_configuration> <disks> <encrypted_policy_multikeys_disk> <type>encrypted</type> <disk>disk_local</disk> <path>encrypted_policy_multikeys_dir/</path> <key>firstfirstfirstf</key> </encrypted_policy_multikeys_disk> </disks> <policies> <encrypted_policy_multikeys> <volumes> <main> <disk>encrypted_policy_multikeys_disk</disk> </main> </volumes> </encrypted_policy_multikeys> </policies> </storage_configuration> </clickhouse>
EOF] Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query SYSTEM RELOAD CONFIG on node get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestdropreplicawithauxiliaryzookeepers-gw8-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.10.3, port:2181, use_ssl:False Connecting to 172.16.10.3(172.16.10.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): Exists(path='/clickhouse/tables/test/test_auxiliary_zookeeper/replicas/node2/is_active', watcher=None) Executing query SYSTEM DROP REPLICA 'node2' on node1 [gw3] PASSED test_hedged_requests_parallel/test.py::test_query_with_no_data_to_sample test_hedged_requests_parallel/test.py::test_send_data Executing query SELECT value FROM system.build_options WHERE name = 'CXX_FLAGS' on node Executing query SYSTEM FLUSH DISTRIBUTED dist on n1 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query SELECT policy_name FROM system.storage_policies WHERE policy_name='encrypted_policy_multikeys' on node Sending request(xid=2): Exists(path='/clickhouse/tables/test/test_auxiliary_zookeeper', watcher=None) Received response(xid=2): ZnodeStat(czxid=4294967334, mzxid=4294967334, ctime=1743560838701, mtime=1743560838701, version=0, cversion=19, aversion=0, ephemeralOwner=0, dataLength=0, numChildren=17, pzxid=4294967348) Sending request(xid=3): Exists(path='/clickhouse/tables/test/test_auxiliary_zookeeper/replicas/node2', watcher=None) Command:[docker compose --env-file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/.env --project-name roottestdropreplicawithauxiliaryzookeepers-gw8 --file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_zookeeper.yml --file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node2/docker-compose.yml stop --timeout 20] [gw8] PASSED test_drop_replica_with_auxiliary_zookeepers/test.py::test_drop_replica_in_auxiliary_zookeeper run container_id:roottesthedgedrequestsparallel-gw3-node_1-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n<clickhouse>\n    <profiles>\n        <default>\n            <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms>\n            <sleep_in_send_data_ms>30000</sleep_in_send_data_ms>\n        </default>\n    </profiles>\n</clickhouse>' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_1-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>30000</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] run container_id:roottesthedgedrequestsparallel-gw3-node_2-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n<clickhouse>\n    <profiles>\n        <default>\n            <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms>\n            <sleep_in_send_data_ms>30000</sleep_in_send_data_ms>\n        </default>\n    </profiles>\n</clickhouse>' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_2-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>30000</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 run container_id:roottesthedgedrequestsparallel-gw3-node_3-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n<clickhouse>\n    <profiles>\n        <default>\n            <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms>\n            <sleep_in_send_data_ms>0</sleep_in_send_data_ms>\n        </default>\n    </profiles>\n</clickhouse>' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_3-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>0</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] Executing query SELECT count() FROM data on n1 Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=Log SETTINGS storage_policy='encrypted_policy_multikeys' on node run container_id:roottesthedgedrequestsparallel-gw3-node_4-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n<clickhouse>\n    <profiles>\n        <default>\n            <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms>\n            <sleep_in_send_data_ms>0</sleep_in_send_data_ms>\n        </default>\n    </profiles>\n</clickhouse>' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_4-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>0</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_2 via HTTP interface Starting new HTTP connection (1): 172.16.4.5:8123 http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_2 via HTTP interface Starting new HTTP connection (1): 172.16.4.5:8123 http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_3 via HTTP interface Starting new HTTP connection (1): 172.16.4.4:8123 http://172.16.4.4:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_3 via HTTP interface Starting new HTTP connection (1): 172.16.4.4:8123 http://172.16.4.4:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_4 via HTTP interface Starting new HTTP connection (1): 172.16.4.2:8123 http://172.16.4.2:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_4 via HTTP interface Starting new HTTP connection (1): 172.16.4.2:8123 http://172.16.4.2:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottesthedgedrequestsparallel-gw3-node-1 bash -c ps -C clickhouse] Executing query DROP TABLE data SYNC; CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key; on n1 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node Stdout: PID TTY TIME CMD Stdout: 2309 ?
00:00:03 clickhouse run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottesthedgedrequestsparallel-gw3-node-1 bash -c pkill clickhouse] run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:2309 Executing query SYSTEM FLUSH DISTRIBUTED dist on n1 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Stderr: node1 Pulling Stderr: node1 Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/.env --project-name roottestkeeperincorrectconfig-gw7 --file /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/node1/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/.env --project-name roottestkeeperincorrectconfig-gw7 --file /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/node1/docker-compose.yml up -d --no-recreate] Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Stderr: instance Pulling Stderr: instance Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/.env --project-name roottesthttpnative-gw6 --file /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/instance/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/.env --project-name roottesthttpnative-gw6 --file /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/instance/docker-compose.yml up -d --no-recreate] Executing query SELECT count() FROM data on n1 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF\n<clickhouse>\n    <storage_configuration>\n        <disks>\n            <encrypted_policy_multikeys_disk>\n                <type>encrypted</type>\n                <disk>disk_local</disk>\n                <path>encrypted_policy_multikeys_dir/</path>\n                <keys>\n                    <key id="0">firstfirstfirstf</key>\n                    <key id="1">secondsecondseco</key>\n                    <current_key>secondsecondseco</current_key>\n                </keys>\n            </encrypted_policy_multikeys_disk>\n        </disks>\n        <policies>\n            <encrypted_policy_multikeys>\n                <volumes>\n                    <main>\n                        <disk>encrypted_policy_multikeys_disk</disk>\n                    </main>\n                </volumes>\n            </encrypted_policy_multikeys>\n        </policies>\n    </storage_configuration>\n</clickhouse>\nEOF'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF <clickhouse> <storage_configuration> <disks> <encrypted_policy_multikeys_disk> <type>encrypted</type> <disk>disk_local</disk> <path>encrypted_policy_multikeys_dir/</path> <keys> <key id="0">firstfirstfirstf</key> <key id="1">secondsecondseco</key> <current_key>secondsecondseco</current_key> </keys> </encrypted_policy_multikeys_disk> </disks> <policies> <encrypted_policy_multikeys> <volumes> <main> <disk>encrypted_policy_multikeys_disk</disk> </main> </volumes> </encrypted_policy_multikeys> </policies> </storage_configuration> </clickhouse>
EOF] Connecting to 172.16.10.3(172.16.10.3):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query SYSTEM RELOAD CONFIG on node Connecting to 172.16.10.3(172.16.10.3):2181, use_ssl: False Executing query SELECT count() FROM data on n2 Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query INSERT INTO encrypted_test VALUES (2,'data'),(3,'data') on node Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node1-1 Stopping Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 Stopping Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 Stopped Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node1-1 Stopped Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo2-1 Stopping Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo3-1 Stopping Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo1-1 Stopping Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo2-1 Stopped Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo3-1 Stopped Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo1-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/.env --project-name roottestdropreplicawithauxiliaryzookeepers-gw8 --file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_zookeeper.yml --file /ClickHouse/tests/integration/test_drop_replica_with_auxiliary_zookeepers/_instances-0-gw8/node2/docker-compose.yml down --volumes] Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 [gw0] PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_different_header[1] test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_success[0] Executing query DROP TABLE IF EXISTS data on n1 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:2309 Stderr: Network roottestkeeperincorrectconfig-gw7_default Creating Stderr: Network 
roottestkeeperincorrectconfig-gw7_default Created Stderr: Container roottestkeeperincorrectconfig-gw7-node1-1 Creating Stderr: Container roottestkeeperincorrectconfig-gw7-node1-1 Created Stderr: Container roottestkeeperincorrectconfig-gw7-node1-1 Starting Stderr: Container roottestkeeperincorrectconfig-gw7-node1-1 Started ClickHouse instance created get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestkeeperincorrectconfig-gw7-node1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestkeeperincorrectconfig-gw7-node1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node1, ip: 172.16.1.2... http://localhost:None "GET /v1.46/containers/roottestkeeperincorrectconfig-gw7-node1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Executing query DROP TABLE IF EXISTS dist on n1 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF\n<clickhouse>\n    <storage_configuration>\n        <disks>\n            <encrypted_policy_multikeys_disk>\n                <type>encrypted</type>\n                <disk>disk_local</disk>\n                <path>encrypted_policy_multikeys_dir/</path>\n                <key>firstfirstfirstf</key>\n            </encrypted_policy_multikeys_disk>\n        </disks>\n        <policies>\n            <encrypted_policy_multikeys>\n                <volumes>\n                    <main>\n                        <disk>encrypted_policy_multikeys_disk</disk>\n                    </main>\n                </volumes>\n            </encrypted_policy_multikeys>\n        </policies>\n    </storage_configuration>\n</clickhouse>\nEOF'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c cat > /etc/clickhouse-server/config.d/storage_policy_encrypted_policy_multikeys.xml << EOF <clickhouse> <storage_configuration> <disks> <encrypted_policy_multikeys_disk> <type>encrypted</type> <disk>disk_local</disk> <path>encrypted_policy_multikeys_dir/</path> <key>firstfirstfirstf</key> </encrypted_policy_multikeys_disk> </disks> <policies> <encrypted_policy_multikeys> <volumes> <main> <disk>encrypted_policy_multikeys_disk</disk> </main> </volumes> </encrypted_policy_multikeys> </policies> </storage_configuration> </clickhouse>
EOF] Executing query SYSTEM RELOAD CONFIG on node http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 Stopping Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node1-1 Stopping Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 Stopped Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 Removing Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node1-1 Stopped Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node1-1 Removing Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node1-1 Removed Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-node2-1 Removed Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo2-1 Stopping Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo3-1 Stopping Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo1-1 Stopping Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo2-1 Stopped Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo2-1 Removing Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo3-1 Stopped Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo3-1 Removing Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo1-1 Stopped Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo1-1 Removing Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo1-1 Removed Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo2-1 Removed Stderr: Container roottestdropreplicawithauxiliaryzookeepers-gw8-zoo3-1 Removed Stderr: Network roottestdropreplicawithauxiliaryzookeepers-gw8_default Removing Stderr: Network roottestdropreplicawithauxiliaryzookeepers-gw8_default Removed Cleanup called Docker networks for project roottestdropreplicawithauxiliaryzookeepers-gw8 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestdropreplicawithauxiliaryzookeepers-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestdropreplicawithauxiliaryzookeepers-gw8 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestdropreplicawithauxiliaryzookeepers-gw8-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestdropreplicawithauxiliaryzookeepers-gw8 Trying to prune unused networks... http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
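The storage_policy_*.xml rewrites followed by SYSTEM RELOAD CONFIG above are how test_log_family rotates encryption keys without restarting the server. A sketch of that reload loop, assuming a node object with query/exec_in_container-style methods (rotate_encryption_keys is a hypothetical name, and the XML follows the configs shown in this log):

    def rotate_encryption_keys(node, policy: str, keys_xml: str) -> None:
        # Rewrite the per-test storage policy config inside the container,
        # as in the 'cat > .../config.d/storage_policy_*.xml << EOF' commands.
        config = (
            "<clickhouse><storage_configuration>"
            f"<disks><{policy}_disk>"
            "<type>encrypted</type><disk>disk_local</disk>"
            f"<path>{policy}_dir/</path>{keys_xml}"
            f"</{policy}_disk></disks>"
            f"<policies><{policy}><volumes><main>"
            f"<disk>{policy}_disk</disk>"
            f"</main></volumes></{policy}></policies>"
            "</storage_configuration></clickhouse>"
        )
        node.exec_in_container(["bash", "-c",
            f"cat > /etc/clickhouse-server/config.d/storage_policy_{policy}.xml << EOF\n{config}\nEOF"])
        # Hot-reload, then confirm the policy is visible, as the log does.
        node.query("SYSTEM RELOAD CONFIG")
        assert policy in node.query("SELECT policy_name FROM system.storage_policies")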
Command:[docker volume ls | wc -l] Stderr: Network roottesthttpnative-gw6_default Creating Stderr: Network roottesthttpnative-gw6_default Created Stderr: Container roottesthttpnative-gw6-instance-1 Creating Stderr: Container roottesthttpnative-gw6-instance-1 Created Stderr: Container roottesthttpnative-gw6-instance-1 Starting Stderr: Container roottesthttpnative-gw6-instance-1 Started ClickHouse instance created get_instance_ip instance_name=instance http://localhost:None "GET /v1.46/containers/roottesthttpnative-gw6-instance-1/json HTTP/1.1" 200 None get_instance_ip instance_name=instance http://localhost:None "GET /v1.46/containers/roottesthttpnative-gw6-instance-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in instance, ip: 172.16.2.2... Stdout:3 Command:[docker volume prune -f] http://localhost:None "GET /v1.46/containers/roottesthttpnative-gw6-instance-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None Stdout:Total reclaimed space: 0B Volumes pruned: 3 test_explain_estimates/test.py::test_explain_estimates Running tests in /ClickHouse/tests/integration/test_explain_estimates/test.py Cluster start called. is_up=False Docker networks for project roottestexplainestimates-gw8 are NETWORK ID NAME DRIVER SCOPE Executing query DROP TABLE IF EXISTS data on n2 Docker containers for project roottestexplainestimates-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Docker volumes for project roottestexplainestimates-gw8 are DRIVER VOLUME NAME Cleanup called Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 Docker networks for project roottestexplainestimates-gw8 are NETWORK ID NAME DRIVER SCOPE http://localhost:None "GET /v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None Docker containers for project roottestexplainestimates-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestexplainestimates-gw8 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestexplainestimates-gw8-.*-1$' --format '{{.ID}}:{{.Names}}'] Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Unstopped containers: {} No running containers for project: roottestexplainestimates-gw8 Trying to prune unused networks... http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
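Each "Cleanup called" block runs the same prune routine seen throughout this log, including the volume count immediately below: list any leftover project containers, then prune unused images and volumes. Roughly, as subprocess calls (a sketch of the observed commands, not the framework's actual code):

    import subprocess

    def docker_cleanup(project: str) -> None:
        # "Command:[docker container list --all --filter name=...]"
        subprocess.run(["docker", "container", "list", "--all",
                        "--filter", f"name=^/{project}-.*-1$",
                        "--format", "{{.ID}}:{{.Names}}"])
        subprocess.run(["docker", "image", "prune", "-f"])      # "Images pruned"
        subprocess.run("docker volume ls | wc -l", shell=True)  # volume count before prune
        subprocess.run(["docker", "volume", "prune", "-f"])     # "Volumes pruned: N"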
Command:[docker volume ls | wc -l] Executing query DROP TABLE IF EXISTS dist on n2 Stdout:3 Command:[docker volume prune -f] http://localhost:None "GET /v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None Stdout:Total reclaimed space: 0B Volumes pruned: 3 Setup directory for instance: instance Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/instance/configs/config.d Setup database dir /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/instance/database Setup logs dir /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/instance/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/.env --project-name roottestexplainestimates-gw8 --file /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/instance/docker-compose.yml pull] http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 http://localhost:None "GET /v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_log_family http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS data on n3 http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None Stdout:2309 http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None http://localhost:None "GET 
/v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None test_encrypted_disk/test.py::test_migration_from_old_version[version_1be] Executing query SELECT policy_name FROM system.storage_policies on node Executing query insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50) on node1 http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS dist on n3 http://localhost:None "GET /v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-node-1 Stopping Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-node-1 Stopped Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo2-1 Stopping Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo3-1 Stopping Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo1-1 Stopping Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo1-1 Stopped Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo2-1 Stopped Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo3-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/.env --project-name roottestfetchpartitionfromauxiliaryzookeeper-gw9 --file /ClickHouse/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/_instances-0-gw9/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml down --volumes] http://localhost:None "GET /v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/storage_policy_migration_from_old_version.xml << EOF\n<clickhouse>\n    <storage_configuration>\n        <disks>\n            <migration_from_old_version_disk>\n                <type>encrypted</type>\n                <disk>disk_local</disk>\n                <path>migration_from_old_version_dir/</path>\n                <keys>\n                    <key id="0">first_key_first_</key>\n                    <key id="1">second_key_secon</key>\n                    <key id="3">third_key_third_</key>\n                    <current_key_id>3</current_key_id>\n                </keys>\n            </migration_from_old_version_disk>\n        </disks>\n        <policies>\n            <migration_from_old_version>\n                <volumes>\n                    <main>\n                        <disk>migration_from_old_version_disk</disk>\n                    </main>\n                </volumes>\n            </migration_from_old_version>\n        </policies>\n    </storage_configuration>\n</clickhouse>\nEOF'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c cat > /etc/clickhouse-server/config.d/storage_policy_migration_from_old_version.xml << EOF <clickhouse> <storage_configuration> <disks> <migration_from_old_version_disk> <type>encrypted</type> <disk>disk_local</disk> <path>migration_from_old_version_dir/</path> <keys> <key id="0">first_key_first_</key> <key id="1">second_key_secon</key> <key id="3">third_key_third_</key> <current_key_id>3</current_key_id> </keys> </migration_from_old_version_disk> </disks> <policies> <migration_from_old_version> <volumes> <main> <disk>migration_from_old_version_disk</disk> </main> </volumes> </migration_from_old_version> </policies> </storage_configuration> </clickhouse>
EOF] Executing query SYSTEM SYNC REPLICA tbl on node2 Executing query DROP TABLE IF EXISTS data on n4 http://localhost:None "GET /v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None Executing query SYSTEM RELOAD CONFIG on node http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None run container_id:roottestjbodha-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'mount -t proc proc /jbod1'] Command:[docker exec -u root --privileged roottestjbodha-gw4-node1-1 bash -c mount -t proc proc /jbod1] Executing query DROP TABLE IF EXISTS dist on n4 http://localhost:None "GET /v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None Executing query SELECT policy_name FROM system.storage_policies WHERE policy_name='migration_from_old_version' on node http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n1 http://localhost:None "GET /v1.46/containers/c209fcb10f0fe182362231dd0a86bbb53721a4a2ea40f021adce2b2348020228/json HTTP/1.1" 200 None ClickHouse instance started Executing query SELECT toDateTime(1676369730, 'Asia/Shanghai') as dt FORMAT Native on instance via HTTP interface Starting new HTTP connection (1): 172.16.2.2:8123 http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-node-1 Stopping Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-node-1 Stopped Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-node-1 Removing Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-node-1 Removed Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo3-1 Stopping Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo1-1 Stopping Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo2-1 Stopping Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo3-1 Stopped Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo3-1 Removing Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo1-1 Stopped Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo1-1 Removing Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo2-1 Stopped Stderr: 
Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo2-1 Removing Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo3-1 Removed Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo1-1 Removed Stderr: Container roottestfetchpartitionfromauxiliaryzookeeper-gw9-zoo2-1 Removed Stderr: Network roottestfetchpartitionfromauxiliaryzookeeper-gw9_default Removing Stderr: Network roottestfetchpartitionfromauxiliaryzookeeper-gw9_default Removed Cleanup called http://172.16.2.2:8123 "GET /?query=SELECT+toDateTime%281676369730%2C+%27Asia%2FShanghai%27%29+as+dt+FORMAT+Native HTTP/1.1" 200 None Executing query SELECT toDateTime(1676369730, 'Asia/Shanghai') as dt FORMAT Native on instance via HTTP interface Starting new HTTP connection (1): 172.16.2.2:8123 http://172.16.2.2:8123 "GET /?client_protocol_version=54337&query=SELECT+toDateTime%281676369730%2C+%27Asia%2FShanghai%27%29+as+dt+FORMAT+Native HTTP/1.1" 200 None Command:[docker compose --env-file /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/.env --project-name roottesthttpnative-gw6 --file /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/instance/docker-compose.yml stop --timeout 20] [gw6] PASSED test_http_native/test.py::test_http_native_returns_timezone Docker networks for project roottestfetchpartitionfromauxiliaryzookeeper-gw9 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestfetchpartitionfromauxiliaryzookeeper-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=Log SETTINGS storage_policy='migration_from_old_version' on node Stdout:2309 Docker volumes for project roottestfetchpartitionfromauxiliaryzookeeper-gw9 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestfetchpartitionfromauxiliaryzookeeper-gw9-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestfetchpartitionfromauxiliaryzookeeper-gw9 Trying to prune unused networks... http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n1 Stdout:Total reclaimed space: 0B Volumes pruned: 3 test_keeper_broken_logs/test.py::test_single_node_broken_log Running tests in /ClickHouse/tests/integration/test_keeper_broken_logs/test.py Cluster start called. 
is_up=False Docker networks for project roottestkeeperbrokenlogs-gw9 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestkeeperbrokenlogs-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Docker volumes for project roottestkeeperbrokenlogs-gw9 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestkeeperbrokenlogs-gw9 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestkeeperbrokenlogs-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestkeeperbrokenlogs-gw9 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestkeeperbrokenlogs-gw9-.*-1$' --format '{{.ID}}:{{.Names}}'] Executing query SELECT data_paths[1] FROM system.tables WHERE table = 'encrypted_test' on node Unstopped containers: {} No running containers for project: roottestkeeperbrokenlogs-gw9 Trying to prune unused networks... http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Setup directory for instance: node1 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_keeper_broken_logs/configs/enable_keeper1.xml'] to /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node1/configs/config.d Setup database dir /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node1/database Setup logs dir /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node1/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" Setup directory for instance: node2 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_keeper_broken_logs/configs/enable_keeper2.xml'] to /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node2/configs/config.d Setup database dir /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node2/database Setup logs dir /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node2/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" 
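The per-instance setup blocks here (node1 and node2 above, node3 next) are produced by the integration harness; a minimal sketch, assuming the helpers.cluster API these tests are built on, of the declaration behind them:

from helpers.cluster import ClickHouseCluster

# Each add_instance() call yields one "Setup directory for instance: ..."
# block like those logged here; cluster.start() then drives the
# "docker compose ... pull / up" sequence. stay_alive=True is an assumption.
cluster = ClickHouseCluster(__file__)
node1 = cluster.add_instance("node1", main_configs=["configs/enable_keeper1.xml"], stay_alive=True)
node2 = cluster.add_instance("node2", main_configs=["configs/enable_keeper2.xml"], stay_alive=True)
node3 = cluster.add_instance("node3", main_configs=["configs/enable_keeper3.xml"], stay_alive=True)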
Setup directory for instance: node3 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_keeper_broken_logs/configs/enable_keeper3.xml'] to /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node3/configs/config.d Setup database dir /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node3/database Setup logs dir /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node3/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/.env --project-name roottestkeeperbrokenlogs-gw9 --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node3/docker-compose.yml pull] http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n1 Executing query DETACH TABLE encrypted_test on node http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n2 http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/data.bin) && echo RU5DAAEAAAAAAAAAAAADC5JtbVMXyMVkdnmxSUgniS8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALO67OiCCp6vfEg27xtmmEyp7ueeOj32rmIlM9U2X17KCNpI | base64 --decode > /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/data.bin'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c mkdir -p $(dirname /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/data.bin) && echo RU5DAAEAAAAAAAAAAAADC5JtbVMXyMVkdnmxSUgniS8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALO67OiCCp6vfEg27xtmmEyp7ueeOj32rmIlM9U2X17KCNpI | base64 --decode > /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/data.bin] http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 
200 None run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/sizes.json) && echo RU5DAAEAAAAAAAAAAAADC2VO2AwcXS39I9f+/cWaEigAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGmjpx/3WRUzbtOanx0Wx+PN2w766OrHwS9vUYFqYgFbs+1KlodpLpxusQ28Ia6g2Ga0uZBWaMgY0AbqUskFN+61whlrl7ehyppjDCp0q5xTmojRCo390XHOW5VGGL5QTW47 | base64 --decode > /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/sizes.json'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c mkdir -p $(dirname /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/sizes.json) && echo RU5DAAEAAAAAAAAAAAADC2VO2AwcXS39I9f+/cWaEigAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGmjpx/3WRUzbtOanx0Wx+PN2w766OrHwS9vUYFqYgFbs+1KlodpLpxusQ28Ia6g2Ga0uZBWaMgY0AbqUskFN+61whlrl7ehyppjDCp0q5xTmojRCo390XHOW5VGGL5QTW47 | base64 --decode > /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/sizes.json] Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n2 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/id.bin) && echo RU5DAAEAAAAAAAAAAAADCz2xgWhWqRt4Z3dRErUPqZoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABzvWfTKPz5EH2aYOLp1vcx6fHNIYKFiTEBJw46laJ89z1IgzJv2 | base64 --decode > /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/id.bin'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c mkdir -p $(dirname /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/id.bin) && echo RU5DAAEAAAAAAAAAAAADCz2xgWhWqRt4Z3dRErUPqZoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABzvWfTKPz5EH2aYOLp1vcx6fHNIYKFiTEBJw46laJ89z1IgzJv2 | base64 --decode > /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/id.bin] http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/__marks.mrk) && echo RU5DAAEAAAAAAAAAAAADC0xOMNALtk9AJbGE5R7iDhsAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAE+RsmouAyqR+MvLcMJKuJN03pXC52884CTTpG5gGwR3 | base64 --decode > /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/__marks.mrk'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c mkdir -p $(dirname /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/__marks.mrk) && echo RU5DAAEAAAAAAAAAAAADC0xOMNALtk9AJbGE5R7iDhsAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAE+RsmouAyqR+MvLcMJKuJN03pXC52884CTTpG5gGwR3 | base64 --decode > /disk/migration_from_old_version_dir/store/b95/b95971b0-5205-454a-8d3f-4f113d0d5547/__marks.mrk] Executing query ATTACH TABLE encrypted_test on node http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n2 http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None run 
container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottesthedgedrequestsparallel-gw3-node-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/f6cffa44e7287b8ed5c9d4e8c756ac822a178dc910f9031e67ab97b17c24e0e4/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/f6cffa44e7287b8ed5c9d4e8c756ac822a178dc910f9031e67ab97b17c24e0e4/json HTTP/1.1" 200 586 Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n3 http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Executing query INSERT INTO encrypted_test VALUES (2,'xyz') on node http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n3 http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node http://localhost:None "GET /v1.46/containers/81ff77b18fb9d2f1af9f28f5f20365922af6852cc1274082d7acf56c8ba8dc8d/json HTTP/1.1" 200 None ClickHouse node1 started run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps -C clickhouse] Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n3 Stdout: PID TTY TIME CMD Stdout: 8 ? 
00:00:01 clickhouse run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestkeeperincorrectconfig-gw7-node1-1 bash -c pkill clickhouse] run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:8 Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_migration_from_old_version[version_1be] Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n4 test_encrypted_disk/test.py::test_migration_from_old_version[version_1le] Executing query SELECT policy_name FROM system.storage_policies on node Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n4 run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:3096 Clickhouse process running. run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/storage_policy_migration_from_old_version.xml << EOF\n<clickhouse>\n<storage_configuration>\n<disks>\n<migration_from_old_version_disk>\n<type>encrypted</type>\n<disk>disk_local</disk>\n<path>migration_from_old_version_dir/</path>\n<keys>\n<key id="1">first_key_first_</key>\n<key id="2">second_key_secon</key>\n<key id="3">third_key_third_</key>\n<current_key_id>3</current_key_id>\n</keys>\n</migration_from_old_version_disk>\n</disks>\n<policies>\n<migration_from_old_version>\n<volumes>\n<main>\n<disk>migration_from_old_version_disk</disk>\n</main>\n</volumes>\n</migration_from_old_version>\n</policies>\n</storage_configuration>\n</clickhouse>\nEOF'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c cat > /etc/clickhouse-server/config.d/storage_policy_migration_from_old_version.xml << EOF <clickhouse> <storage_configuration> <disks> <migration_from_old_version_disk> <type>encrypted</type> <disk>disk_local</disk> <path>migration_from_old_version_dir/</path> <keys> <key id="1">first_key_first_</key> <key id="2">second_key_secon</key> <key id="3">third_key_third_</key> <current_key_id>3</current_key_id> </keys> </migration_from_old_version_disk> </disks> <policies> <migration_from_old_version> <volumes> <main> <disk>migration_from_old_version_disk</disk> </main> </volumes> </migration_from_old_version> </policies> </storage_configuration> </clickhouse>
EOF] Stdout:3096 Executing query select 20 on node Executing query SYSTEM RELOAD CONFIG on node Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n4 Executing query INSERT INTO dist SELECT number, randomPrintableASCII(100) FROM numbers(10000) on n2 Executing query SELECT policy_name FROM system.storage_policies WHERE policy_name='migration_from_old_version' on node run container_id:roottestinsertdistributedasyncsend-gw0-n2-1 detach:False nothrow:False cmd: ['bash', '-c', 'wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n2-1 bash -c wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin] Stdout:1054718 Executing query SYSTEM FLUSH DISTRIBUTED dist on n2 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=Log SETTINGS storage_policy='migration_from_old_version' on node Stdout:8 Executing query SELECT count() FROM data on n1 Executing query SELECT data_paths[1] FROM system.tables WHERE table = 'encrypted_test' on node Executing query select 20 on node Executing query SELECT count() FROM data on n2 Executing query SELECT count() FROM distributed on node Executing query DETACH TABLE encrypted_test on node Stderr: Container roottesthttpnative-gw6-instance-1 Stopping Stderr: Container roottesthttpnative-gw6-instance-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/instance/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/instance/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/.env --project-name roottesthttpnative-gw6 --file /ClickHouse/tests/integration/test_http_native/_instances-0-gw6/instance/docker-compose.yml down --volumes] [gw0] PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_success[0] test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_success[1] run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/data.bin) && echo RU5DAQAAAAMAAAAAAAAAC3XsFrGsS7fqU1ItLNwdTe8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACl4S3/gOndRZLjYlV31hsbszXX+FIrxUErgn2zsNrmPIVwH | base64 --decode > /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/data.bin'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c mkdir -p $(dirname /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/data.bin) && echo RU5DAQAAAAMAAAAAAAAAC3XsFrGsS7fqU1ItLNwdTe8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACl4S3/gOndRZLjYlV31hsbszXX+FIrxUErgn2zsNrmPIVwH | base64 --decode > /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/data.bin] Executing query DROP TABLE IF EXISTS data on n1 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname 
/disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/sizes.json) && echo RU5DAQAAAAMAAAAAAAAAC9ydiySH3+PebS3LNeRcVr4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFiObABAsAy8a7MF3y3VLTKEBzhP0y03LYgOE7wftbYNAqlH+QrpIo0K1cSprJc+4zB+jgnkTsCS1hk3aPY7YkoW7/+l11AMz/4QT/uMFYu9rFFkbQlQxSEkz4vrO3ZW8Z2N | base64 --decode > /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/sizes.json'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c mkdir -p $(dirname /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/sizes.json) && echo RU5DAQAAAAMAAAAAAAAAC9ydiySH3+PebS3LNeRcVr4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFiObABAsAy8a7MF3y3VLTKEBzhP0y03LYgOE7wftbYNAqlH+QrpIo0K1cSprJc+4zB+jgnkTsCS1hk3aPY7YkoW7/+l11AMz/4QT/uMFYu9rFFkbQlQxSEkz4vrO3ZW8Z2N | base64 --decode > /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/sizes.json] run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/id.bin) && echo RU5DAQAAAAMAAAAAAAAAC2Wf46Yi+8s88Ra/0OtcK+IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAJsKRQD+cjGuLOFUCGLSeA7VxnWn5VN7LkHtQo56lXQfGLs+jrkM | base64 --decode > /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/id.bin'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c mkdir -p $(dirname /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/id.bin) && echo RU5DAQAAAAMAAAAAAAAAC2Wf46Yi+8s88Ra/0OtcK+IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAJsKRQD+cjGuLOFUCGLSeA7VxnWn5VN7LkHtQo56lXQfGLs+jrkM | base64 --decode > /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/id.bin] run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/__marks.mrk) && echo RU5DAQAAAAMAAAAAAAAAC6HFIYSz06S74XJ4/Rr2Ut0AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAERHHpil1tvgJKZ5duVyIRc+14n/5fiZp6YMSPsXYWVY | base64 --decode > /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/__marks.mrk'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c mkdir -p $(dirname /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/__marks.mrk) && echo RU5DAQAAAAMAAAAAAAAAC6HFIYSz06S74XJ4/Rr2Ut0AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAERHHpil1tvgJKZ5duVyIRc+14n/5fiZp6YMSPsXYWVY | base64 --decode > /disk/migration_from_old_version_dir/store/4fa/4faa267d-22f8-4655-bb22-e681fb4afdf6/__marks.mrk] Executing query ATTACH TABLE encrypted_test on node Executing query DROP TABLE IF EXISTS dist on n1 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Stdout:8 Executing query DROP TABLE IF EXISTS data on n2 Stderr: Container roottesthttpnative-gw6-instance-1 Stopping Stderr: Container roottesthttpnative-gw6-instance-1 Stopped Stderr: Container roottesthttpnative-gw6-instance-1 Removing 
Stderr: Container roottesthttpnative-gw6-instance-1 Removed Stderr: Network roottesthttpnative-gw6_default Removing Stderr: Network roottesthttpnative-gw6_default Removed Cleanup called Docker networks for project roottesthttpnative-gw6 are NETWORK ID NAME DRIVER SCOPE Executing query INSERT INTO encrypted_test VALUES (2,'xyz') on node Docker containers for project roottesthttpnative-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottesthttpnative-gw6 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottesthttpnative-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottesthttpnative-gw6 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Executing query DROP TABLE IF EXISTS dist on n2 Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query DROP TABLE IF EXISTS data on n3 Executing query DROP TABLE IF EXISTS dist on n3 Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_migration_from_old_version[version_1le] Executing query DROP TABLE IF EXISTS data on n4 test_encrypted_disk/test.py::test_migration_from_old_version[version_2] Executing query SELECT policy_name FROM system.storage_policies on node run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:8 Executing query DROP TABLE IF EXISTS dist on n4 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/storage_policy_migration_from_old_version.xml << EOF\n<clickhouse>\n<storage_configuration>\n<disks>\n<migration_from_old_version_disk>\n<type>encrypted</type>\n<disk>disk_local</disk>\n<path>migration_from_old_version_dir/</path>\n<keys>\n<key id="1">first_key_first_</key>\n<key id="2">second_key_secon</key>\n<key id="3">third_key_third_</key>\n<current_key_id>3</current_key_id>\n</keys>\n</migration_from_old_version_disk>\n</disks>\n<policies>\n<migration_from_old_version>\n<volumes>\n<main>\n<disk>migration_from_old_version_disk</disk>\n</main>\n</volumes>\n</migration_from_old_version>\n</policies>\n</storage_configuration>\n</clickhouse>\nEOF'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c cat > /etc/clickhouse-server/config.d/storage_policy_migration_from_old_version.xml << EOF <clickhouse> <storage_configuration> <disks> <migration_from_old_version_disk> <type>encrypted</type> <disk>disk_local</disk> <path>migration_from_old_version_dir/</path> <keys> <key id="1">first_key_first_</key> <key id="2">second_key_secon</key> <key id="3">third_key_third_</key> <current_key_id>3</current_key_id> </keys> </migration_from_old_version_disk> </disks> <policies> <migration_from_old_version> <volumes> <main> <disk>migration_from_old_version_disk</disk> </main> </volumes> </migration_from_old_version> </policies> </storage_configuration> </clickhouse>
EOF] Executing query SYSTEM RELOAD CONFIG on node Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n1 Executing query SELECT policy_name FROM system.storage_policies WHERE policy_name='migration_from_old_version' on node Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n1 Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n1 Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=Log SETTINGS storage_policy='migration_from_old_version' on node Executing query SELECT data_paths[1] FROM system.tables WHERE table = 'encrypted_test' on node Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n2 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n<clickhouse>\n<keeper_server>\n<tcp_port>9181</tcp_port>\n<server_id>1</server_id>\n<log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>\n<snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>\n\n<coordination_settings>\n<operation_timeout_ms>5000</operation_timeout_ms>\n<session_timeout_ms>10000</session_timeout_ms>\n<raft_logs_level>trace</raft_logs_level>\n</coordination_settings>\n\n<raft_configuration>\n<server>\n<id>1</id>\n<hostname>node1</hostname>\n<port>9234</port>\n</server>\n<server>\n<id>2</id>\n<hostname>node1</hostname>\n<port>9234</port>\n</server>\n</raft_configuration>\n</keeper_server>\n</clickhouse>\n' > /etc/clickhouse-server/config.d/enable_keeper1.xml"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c echo '<clickhouse> <keeper_server> <tcp_port>9181</tcp_port> <server_id>1</server_id> <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path> <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path> <coordination_settings> <operation_timeout_ms>5000</operation_timeout_ms> <session_timeout_ms>10000</session_timeout_ms> <raft_logs_level>trace</raft_logs_level> </coordination_settings> <raft_configuration> <server> <id>1</id> <hostname>node1</hostname> <port>9234</port> </server> <server> <id>2</id> <hostname>node1</hostname> <port>9234</port> </server> </raft_configuration> </keeper_server> </clickhouse>' > /etc/clickhouse-server/config.d/enable_keeper1.xml] Executing query DETACH TABLE encrypted_test on node run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n2 No clickhouse process running. Start new one.
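The cycle just logged, rewrite the storage-policy XML in config.d, SYSTEM RELOAD CONFIG, then confirm the policy in system.storage_policies, is a hot reload with no server restart. A minimal sketch of it, assuming the harness's ClickHouseInstance API; swap_storage_policy is a hypothetical helper name:

def swap_storage_policy(node, xml_text):
    # Overwrite the config.d fragment inside the container.
    node.exec_in_container(
        ["bash", "-c",
         "cat > /etc/clickhouse-server/config.d/"
         "storage_policy_migration_from_old_version.xml << 'EOF'\n"
         + xml_text + "\nEOF"]
    )
    # Ask the running server to re-read configuration files.
    node.query("SYSTEM RELOAD CONFIG")
    # The new policy should now be visible.
    assert "migration_from_old_version" in node.query(
        "SELECT policy_name FROM system.storage_policies"
    )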
http://localhost:None "POST /v1.46/containers/roottestkeeperincorrectconfig-gw7-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/a9f437b7ae54466716091a114f6dcde3f91c19b0483c061dd717322243058eaf/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/a9f437b7ae54466716091a114f6dcde3f91c19b0483c061dd717322243058eaf/json HTTP/1.1" 200 586 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/data.bin) && echo RU5DAgAAAJSGTCyC/dBkeHP+O1u79h3d6aHOVNgqgPS+QSt175LJAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAzDJqKbKDlNNZnPh/TLeXvJq311P3YLPJOyz1cmpAFp/HOb | base64 --decode > /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/data.bin'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c mkdir -p $(dirname /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/data.bin) && echo RU5DAgAAAJSGTCyC/dBkeHP+O1u79h3d6aHOVNgqgPS+QSt175LJAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAzDJqKbKDlNNZnPh/TLeXvJq311P3YLPJOyz1cmpAFp/HOb | base64 --decode > /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/data.bin] Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n2 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/sizes.json) && echo RU5DAgAAAJSGTCyC/dBkeHP+O1u79h2POvvrUgd7BKVTpRRSlqz1AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAC6lkwqgXHwmNC+Myu+XqdCqqlDSk+DkT9k4jxCBPUfHWEaYnnVlCdssvNvM55spcUqp62Q//YCQ9gGbaWK5p624JIZ8SlxsBn0yehEoPrK3gE4HCqpf9wYg8ewATVTFhoNi | base64 --decode > /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/sizes.json'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c mkdir -p $(dirname /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/sizes.json) && echo RU5DAgAAAJSGTCyC/dBkeHP+O1u79h2POvvrUgd7BKVTpRRSlqz1AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAC6lkwqgXHwmNC+Myu+XqdCqqlDSk+DkT9k4jxCBPUfHWEaYnnVlCdssvNvM55spcUqp62Q//YCQ9gGbaWK5p624JIZ8SlxsBn0yehEoPrK3gE4HCqpf9wYg8ewATVTFhoNi | base64 --decode > /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/sizes.json] run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/id.bin) && echo RU5DAgAAAJSGTCyC/dBkeHP+O1u79h1HWUiKsaMSJeIa4maLK5nFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA964BxhK9W1z9bKSndVFEvJljDUzHrUi0qnKKVxINdmgBkd7JPL | base64 --decode > /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/id.bin'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c mkdir -p $(dirname /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/id.bin) && echo RU5DAgAAAJSGTCyC/dBkeHP+O1u79h1HWUiKsaMSJeIa4maLK5nFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA964BxhK9W1z9bKSndVFEvJljDUzHrUi0qnKKVxINdmgBkd7JPL | base64 --decode > /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/id.bin] run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/__marks.mrk) && echo 
RU5DAgAAAJSGTCyC/dBkeHP+O1u79h1CW10PJ1mWuGRxUKxZTpfoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALBWY9wVCe+U+xwNJTqAGPGs53YAv2Ri6A6zxJvR75yQ | base64 --decode > /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/__marks.mrk'] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c mkdir -p $(dirname /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/__marks.mrk) && echo RU5DAgAAAJSGTCyC/dBkeHP+O1u79h1CW10PJ1mWuGRxUKxZTpfoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALBWY9wVCe+U+xwNJTqAGPGs53YAv2Ri6A6zxJvR75yQ | base64 --decode > /disk/migration_from_old_version_dir/store/3a6/3a63103a-3012-489e-a919-92044be4b7ca/__marks.mrk] Executing query ATTACH TABLE encrypted_test on node Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n3 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n3 Executing query INSERT INTO encrypted_test VALUES (2,'xyz') on node Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n3 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n4 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:802 Clickhouse process running. 
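The mkdir/base64 commands above plant pre-encrypted fixture files (data.bin, sizes.json, id.bin, __marks.mrk) under the disk's store/ path while encrypted_test is detached, so the following ATTACH has to decrypt data written by an older on-disk format version. A sketch of that step; plant_encrypted_file is an assumed helper name over the same ClickHouseInstance API:

def plant_encrypted_file(node, path, payload_b64):
    # Create the store/<prefix>/<table uuid>/ directory, then decode the
    # fixture into it, mirroring the docker exec commands in this log.
    node.exec_in_container(
        ["bash", "-c",
         f"mkdir -p $(dirname {path}) && echo {payload_b64} | base64 --decode > {path}"]
    )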
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:802 Executing query select 20 on node1 Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n4 Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_migration_from_old_version[version_2] Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n4 Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='local_policy' on node test_encrypted_disk/test.py::test_optimize_table[local_policy-disk_local_encrypted] Executing query SELECT value FROM system.events WHERE event='HedgedRequestsChangeReplica' on node Executing query INSERT INTO dist SELECT number, randomPrintableASCII(100) FROM numbers(10000) on n1 Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node Executing query select total_space from system.disks where name = 'jbod1' on node1 Executing query SELECT value FROM system.build_options WHERE name = 'CXX_FLAGS' on node [gw3] PASSED test_hedged_requests_parallel/test.py::test_send_data test_hedged_requests_parallel/test.py::test_send_table_status_sleep Executing query select 20 on node1 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node run container_id:roottestinsertdistributedasyncsend-gw0-n1-1 detach:False nothrow:False cmd: ['bash', '-c', 'wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n1-1 bash -c wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin] Executing query select count(p) from tbl on node1 Stdout:1054688 Executing query SYSTEM FLUSH DISTRIBUTED dist on n1 run container_id:roottesthedgedrequestsparallel-gw3-node_1-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n<clickhouse>\n<profiles>\n<default>\n<sleep_in_send_tables_status_ms>30000</sleep_in_send_tables_status_ms>\n<sleep_in_send_data_ms>0</sleep_in_send_data_ms>\n</default>\n</profiles>\n</clickhouse>\n' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_1-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>30000</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>0</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] run container_id:roottesthedgedrequestsparallel-gw3-node_2-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n<clickhouse>\n<profiles>\n<default>\n<sleep_in_send_tables_status_ms>30000</sleep_in_send_tables_status_ms>\n<sleep_in_send_data_ms>0</sleep_in_send_data_ms>\n</default>\n</profiles>\n</clickhouse>\n' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_2-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>30000</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>0</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_local_encrypted' on node run container_id:roottesthedgedrequestsparallel-gw3-node_3-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n<clickhouse>\n<profiles>\n<default>\n<sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms>\n<sleep_in_send_data_ms>0</sleep_in_send_data_ms>\n</default>\n</profiles>\n</clickhouse>\n' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_3-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>0</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] run container_id:roottesthedgedrequestsparallel-gw3-node_4-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n<clickhouse>\n<profiles>\n<default>\n<sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms>\n<sleep_in_send_data_ms>0</sleep_in_send_data_ms>\n</default>\n</profiles>\n</clickhouse>\n' > /etc/clickhouse-server/users.d/users1.xml"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node_4-1 bash -c echo '<clickhouse> <profiles> <default> <sleep_in_send_tables_status_ms>0</sleep_in_send_tables_status_ms> <sleep_in_send_data_ms>0</sleep_in_send_data_ms> </default> </profiles> </clickhouse>' > /etc/clickhouse-server/users.d/users1.xml] run
container_id:roottestjbodha-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'umount /jbod1'] Command:[docker exec -u root --privileged roottestjbodha-gw4-node1-1 bash -c umount /jbod1] Executing query SELECT count() FROM data on n1 Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 run container_id:roottestjbodha-gw4-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestjbodha-gw4-node1-1 bash -c ps -C clickhouse] http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Stdout: PID TTY TIME CMD Stdout: 8 ? 00:00:05 clickhouse run container_id:roottestjbodha-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestjbodha-gw4-node1-1 bash -c pkill clickhouse] Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node run container_id:roottestjbodha-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestjbodha-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 Stdout:8 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT count() FROM data on n2 Executing query select 20 on node1 Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Executing query INSERT INTO encrypted_test VALUES (2,'data'),(3,'data') on node Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None [gw0] PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_success[1] test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[0-0] Executing query DROP TABLE IF EXISTS data on n1 Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET 
/?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query OPTIMIZE TABLE encrypted_test FINAL on node Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS dist on n1 Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_local_encrypted' on node Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS data on n2 Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query select 20 on node1 Executing query DROP TABLE IF EXISTS dist on n2 Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None run container_id:roottestjbodha-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestjbodha-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 
'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_optimize_table[local_policy-disk_local_encrypted] Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 Executing query DROP TABLE IF EXISTS data on n3 Stdout:8 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Stderr: instance Pulling Stderr: instance Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/.env --project-name roottestexplainestimates-gw8 --file /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/instance/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/.env --project-name roottestexplainestimates-gw8 --file /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/instance/docker-compose.yml up -d --no-recreate] Stderr: node3 Skipped - Image is already being pulled by node2 Stderr: node1 Skipped - Image is already being pulled by node2 Stderr: node2 Pulling Stderr: node2 Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/.env --project-name roottestkeeperbrokenlogs-gw9 --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node3/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/.env --project-name roottestkeeperbrokenlogs-gw9 --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node3/docker-compose.yml up -d --no-recreate] Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None test_encrypted_disk/test.py::test_optimize_table[s3_policy-disk_s3_encrypted] Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='s3_policy' on node 
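test_optimize_table, starting here for the s3_policy case, spreads parts across a plain and an encrypted disk and then checks that OPTIMIZE ... FINAL can merge across them. A sketch of the flow, reconstructed from the queries logged in this section (the expected literal in the assert is an assumption):

def check_optimize(node, encrypted_disk):
    node.query("INSERT INTO encrypted_test VALUES (0,'data'),(1,'data')")
    # Move the first part onto the encrypted disk so the merge spans disks.
    node.query(f"ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK '{encrypted_disk}'")
    node.query("INSERT INTO encrypted_test VALUES (2,'data'),(3,'data')")
    node.query("OPTIMIZE TABLE encrypted_test FINAL")
    assert node.query(
        "SELECT * FROM encrypted_test ORDER BY id FORMAT Values"
    ).strip() == "(0,'data'),(1,'data'),(2,'data'),(3,'data')"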
Executing query DROP TABLE IF EXISTS dist on n3 Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS data on n4 Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query select 20 on node1 Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS dist on n4 Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_1 via HTTP interface Starting new HTTP connection (1): 172.16.4.3:8123 http://172.16.4.3:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_2 via HTTP interface Starting new HTTP connection (1): 172.16.4.5:8123 http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_2 via HTTP interface Starting new HTTP connection (1): 172.16.4.5:8123 Stderr: Network roottestexplainestimates-gw8_default Creating Stderr: Network roottestexplainestimates-gw8_default Created Stderr: Container roottestexplainestimates-gw8-instance-1 Creating Stderr: Container roottestexplainestimates-gw8-instance-1 Created Stderr: Container roottestexplainestimates-gw8-instance-1 Starting Stderr: Container roottestexplainestimates-gw8-instance-1 Started ClickHouse instance created get_instance_ip instance_name=instance http://localhost:None "GET /v1.46/containers/roottestexplainestimates-gw8-instance-1/json HTTP/1.1" 200 None get_instance_ip instance_name=instance http://localhost:None "GET /v1.46/containers/roottestexplainestimates-gw8-instance-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in instance, ip: 172.16.2.2... 
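The repeated GETs against port 8123 in this stretch are the hedged-requests test polling system.settings over HTTP until the users1.xml values written above are live on each replica. A small self-contained sketch of one poll, using only the standard library and the URL shape visible in the log:

import urllib.parse
import urllib.request

def read_setting(ip, name):
    # Same request the log shows: GET /?query=SELECT+value+FROM+system.settings...
    query = urllib.parse.quote(f"SELECT value FROM system.settings WHERE name='{name}'")
    with urllib.request.urlopen(f"http://{ip}:8123/?query={query}") as resp:
        return resp.read().decode().strip()

# e.g. read_setting("172.16.4.3", "sleep_in_send_tables_status_ms") -> "30000"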
http://localhost:None "GET /v1.46/containers/roottestexplainestimates-gw8-instance-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/e94fb2c79e395593fb23b35e219fce990ff38008d65dc7fc43aa597bb20acbdb/json HTTP/1.1" 200 None http://172.16.4.5:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_3 via HTTP interface Starting new HTTP connection (1): 172.16.4.4:8123 http://172.16.4.4:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_3 via HTTP interface Starting new HTTP connection (1): 172.16.4.4:8123 http://172.16.4.4:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms' on node_4 via HTTP interface Starting new HTTP connection (1): 172.16.4.2:8123 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node http://172.16.4.2:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_tables_status_ms%27 HTTP/1.1" 200 None Executing query SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms' on node_4 via HTTP interface Starting new HTTP connection (1): 172.16.4.2:8123 http://localhost:None "GET /v1.46/containers/e94fb2c79e395593fb23b35e219fce990ff38008d65dc7fc43aa597bb20acbdb/json HTTP/1.1" 200 None http://172.16.4.2:8123 "GET /?query=SELECT+value+FROM+system.settings+WHERE+name%3D%27sleep_in_send_data_ms%27 HTTP/1.1" 200 None Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n1 run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottesthedgedrequestsparallel-gw3-node-1 bash -c ps -C clickhouse] Stdout: PID TTY TIME CMD Stdout: 3096 ? 
00:00:03 clickhouse run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottesthedgedrequestsparallel-gw3-node-1 bash -c pkill clickhouse] http://localhost:None "GET /v1.46/containers/e94fb2c79e395593fb23b35e219fce990ff38008d65dc7fc43aa597bb20acbdb/json HTTP/1.1" 200 None run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:3096 http://localhost:None "GET /v1.46/containers/e94fb2c79e395593fb23b35e219fce990ff38008d65dc7fc43aa597bb20acbdb/json HTTP/1.1" 200 None run container_id:roottestjbodha-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestjbodha-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/e94fb2c79e395593fb23b35e219fce990ff38008d65dc7fc43aa597bb20acbdb/json HTTP/1.1" 200 None Stderr: Network roottestkeeperbrokenlogs-gw9_default Creating Stderr: Network roottestkeeperbrokenlogs-gw9_default Created Stderr: Container roottestkeeperbrokenlogs-gw9-node2-1 Creating Stderr: Container roottestkeeperbrokenlogs-gw9-node3-1 Creating Stderr: Container roottestkeeperbrokenlogs-gw9-node1-1 Creating Stderr: Container roottestkeeperbrokenlogs-gw9-node3-1 Created Stderr: Container roottestkeeperbrokenlogs-gw9-node2-1 Created Stderr: Container roottestkeeperbrokenlogs-gw9-node1-1 Created Stderr: Container roottestkeeperbrokenlogs-gw9-node1-1 Starting Stderr: Container roottestkeeperbrokenlogs-gw9-node3-1 Starting Stderr: Container roottestkeeperbrokenlogs-gw9-node2-1 Starting Stderr: Container roottestkeeperbrokenlogs-gw9-node1-1 Started Stdout:8 Stderr: Container roottestkeeperbrokenlogs-gw9-node2-1 Started Stderr: Container roottestkeeperbrokenlogs-gw9-node3-1 Started ClickHouse instance created get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node1, ip: 172.16.3.2... 
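Above, the harness pkills clickhouse in roottesthedgedrequestsparallel-gw3-node-1 and keeps polling ps (the repeated Stdout:3096 lines) until the old process is gone and a new one appears. A sketch of that loop, assuming the ClickHouseInstance.exec_in_container API used throughout this log:

import time

def clickhouse_pids(node):
    out = node.exec_in_container(
        ["bash", "-c",
         "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' "
         "| grep -v 'bash -c' | awk '{print $1}'"],
        nothrow=True,
    )
    return out.split()

def wait_clickhouse_restart(node, old_pid, timeout=60):
    # Poll until a PID different from the killed one shows up.
    deadline = time.time() + timeout
    while time.time() < deadline:
        pids = clickhouse_pids(node)
        if pids and pids[0] != old_pid:
            return pids[0]
        time.sleep(0.5)
    raise TimeoutError("clickhouse did not come back")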
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n1 Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_s3_encrypted' on node Executing query select 20 on node1 http://localhost:None "GET /v1.46/containers/e94fb2c79e395593fb23b35e219fce990ff38008d65dc7fc43aa597bb20acbdb/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/e94fb2c79e395593fb23b35e219fce990ff38008d65dc7fc43aa597bb20acbdb/json HTTP/1.1" 200 None Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n1 http://localhost:None "GET /v1.46/containers/e94fb2c79e395593fb23b35e219fce990ff38008d65dc7fc43aa597bb20acbdb/json HTTP/1.1" 200 None Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n2 http://localhost:None "GET /v1.46/containers/e94fb2c79e395593fb23b35e219fce990ff38008d65dc7fc43aa597bb20acbdb/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/e94fb2c79e395593fb23b35e219fce990ff38008d65dc7fc43aa597bb20acbdb/json HTTP/1.1" 200 None Executing query INSERT INTO encrypted_test VALUES (2,'data'),(3,'data') on node http://localhost:None "GET /v1.46/containers/e94fb2c79e395593fb23b35e219fce990ff38008d65dc7fc43aa597bb20acbdb/json HTTP/1.1" 200 None Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n2 http://localhost:None "GET /v1.46/containers/e94fb2c79e395593fb23b35e219fce990ff38008d65dc7fc43aa597bb20acbdb/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None Executing query select 20 on node1 Executing query OPTIMIZE TABLE encrypted_test FINAL on node http://localhost:None "GET /v1.46/containers/e94fb2c79e395593fb23b35e219fce990ff38008d65dc7fc43aa597bb20acbdb/json HTTP/1.1" 200 None Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n2 http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:3096 http://localhost:None "GET /v1.46/containers/e94fb2c79e395593fb23b35e219fce990ff38008d65dc7fc43aa597bb20acbdb/json HTTP/1.1" 200 None ClickHouse instance started Executing query CREATE TABLE test (i Int64) ENGINE = MergeTree() ORDER BY i SETTINGS index_granularity = 16 on instance http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None run container_id:roottestjbodha-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestjbodha-gw4-node1-1 bash -c ps ax | 
grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_s3_encrypted' on node Stdout:8 Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n3 Executing query INSERT INTO test SELECT number FROM numbers(128) on instance http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n3 http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None Executing query OPTIMIZE TABLE test on instance Executing query select 20 on node1 http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n3 Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_optimize_table[s3_policy-disk_s3_encrypted] http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None Executing query SELECT any(database), any(table), count() as parts, sum(rows) as rows, sum(marks)-1 as marks FROM system.parts WHERE database = 'default' AND table = 'test' and active = 1 GROUP BY (database, table) on instance Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n4 http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None test_encrypted_disk/test.py::test_part_move[local_policy-destination_disks0] Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='local_policy', temporary_directories_lifetime=1 on node Executing query EXPLAIN ESTIMATE SELECT * FROM test on instance Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n4 http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:3096 http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node run 
container_id:roottestjbodha-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestjbodha-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Command:[docker compose --env-file /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/.env --project-name roottestexplainestimates-gw8 --file /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/instance/docker-compose.yml stop --timeout 20] [gw8] PASSED test_explain_estimates/test.py::test_explain_estimates http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n4 Executing query select 20 on node1 run container_id:roottestjbodha-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestjbodha-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestjbodha-gw4-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/034861970b77d752701650d3ccb750c1ec9b946dfbaf74ddd4662d4e30dbc456/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/034861970b77d752701650d3ccb750c1ec9b946dfbaf74ddd4662d4e30dbc456/json HTTP/1.1" 200 586 http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None Executing query INSERT INTO dist SELECT number, randomPrintableASCII(100) FROM numbers(10000) on n2 http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_local_encrypted' on node http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None run container_id:roottestinsertdistributedasyncsend-gw0-n2-1 detach:False nothrow:False cmd: ['bash', '-c', 'wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n2-1 bash -c wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin] http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None Stdout:1054718 run container_id:roottestinsertdistributedasyncsend-gw0-n2-1 detach:False nothrow:False cmd: ['bash', '-c', 'mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1054708 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n2-1 bash -c mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1054708 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin] Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node 
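The interleaved queries on `instance` above are essentially the whole of test_explain_estimates: build a small MergeTree part with a known granularity, then check EXPLAIN ESTIMATE against ground truth from system.parts. Roughly, assuming a node.query() helper like the harness's ClickHouseInstance.query (the assertion shape is a guess from the queries alone):

    # index_granularity = 16 and 128 inserted rows give 8 full granules; the
    # extra final mark stored per part is presumably why the probe subtracts 1.
    node.query("CREATE TABLE test (i Int64) ENGINE = MergeTree() ORDER BY i "
               "SETTINGS index_granularity = 16")
    node.query("INSERT INTO test SELECT number FROM numbers(128)")
    node.query("OPTIMIZE TABLE test")  # collapse to a single active part
    expected = node.query(
        "SELECT any(database), any(table), count() as parts, sum(rows) as rows, "
        "sum(marks)-1 as marks FROM system.parts "
        "WHERE database = 'default' AND table = 'test' and active = 1 "
        "GROUP BY (database, table)")
    assert node.query("EXPLAIN ESTIMATE SELECT * FROM test") == expected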
Executing query SYSTEM FLUSH DISTRIBUTED dist on n2
http://localhost:None "GET /v1.46/containers/db27288f726f4648843dce60ca0e406a75b0132890f036256237008eb5638c97/json HTTP/1.1" 200 None
ClickHouse node1 started
get_instance_ip instance_name=node2
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node2-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=node2
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node2-1/json HTTP/1.1" 200 None
Waiting for ClickHouse start in node2, ip: 172.16.3.4...
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node2-1/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/db0189af878e6300796e680d43c6f225955618fb2374e3ccd990c53ea9800df3/json HTTP/1.1" 200 None
ClickHouse node2 started
get_instance_ip instance_name=node3
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node3-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=node3
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node3-1/json HTTP/1.1" 200 None
Waiting for ClickHouse start in node3, ip: 172.16.3.3...
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node3-1/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/506c99294f385d9b94437a3840c88e84bc52321ffb0a59edbf7d4fa23d9f1eb9/json HTTP/1.1" 200 None
ClickHouse node3 started
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse']
Command:[docker exec -u root roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps -C clickhouse]
Stdout: PID TTY TIME CMD
Stdout: 8 ? 00:00:01 clickhouse
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse']
Command:[docker exec -u root roottestkeeperbrokenlogs-gw9-node1-1 bash -c pkill clickhouse]
Executing query select 20 on node1
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:8
Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_local_encrypted' on node
run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query SYSTEM FLUSH DISTRIBUTED dist on n2
Stdout:3096
Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_local_encrypted2' on node
run container_id:roottestinsertdistributedasyncsend-gw0-n2-1 detach:False nothrow:False cmd: ['bash', '-c', 'ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin']
Command:[docker exec roottestinsertdistributedasyncsend-gw0-n2-1 bash -c ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin]
run container_id:roottestjbodha-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestjbodha-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:/var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin
Executing query SELECT count() FROM data on n1
Stdout:795
Clickhouse process running.
run container_id:roottestjbodha-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestjbodha-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:795
Executing query select 20 on node1
Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node
Executing query SELECT count() FROM data on n2
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Current start attempt failed. Will kill 802 just in case.
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'kill -9 802']
Command:[docker exec -u root roottestkeeperincorrectconfig-gw7-node1-1 bash -c kill -9 802]
Stderr:bash: line 1: kill: (802) - No such process
Exitcode:1
Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_local_encrypted2' on node
[gw0] PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[0-0]
test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[0-1]
Executing query DROP TABLE IF EXISTS data on n1
Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_local_encrypted_key192b' on node
Executing query DROP TABLE IF EXISTS dist on n1
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:8
Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node
Executing query select 20 on node1
run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
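The wc -c / head -c pair above is how test_insert_distributed_async_send corrupts a pending distributed batch: it shaves the last 10 bytes off the .bin file so the receiver sees a truncated block. A sketch of just that step, reusing the container name and path from the log (docker_bash is a hypothetical wrapper):

    import subprocess

    def docker_bash(container, script):
        return subprocess.run(["docker", "exec", container, "bash", "-c", script],
                              capture_output=True, text=True, check=True).stdout

    container = "roottestinsertdistributedasyncsend-gw0-n2-1"
    bin_path = "/var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin"
    size = int(docker_bash(container, f"wc -c < {bin_path}"))  # 1054718 in the log
    # Keep everything except the last 10 bytes, exactly like head -c 1054708 above.
    docker_bash(container, f"mv {bin_path} /tmp/bin && "
                           f"head -c {size - 10} /tmp/bin > {bin_path}")
    # After `SYSTEM FLUSH DISTRIBUTED dist`, the damaged batch is expected to be
    # quarantined as .../shard1_replica2/broken/2.bin instead of being sent.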
Executing query DROP TABLE IF EXISTS data on n2
No clickhouse process running. Start new one.
http://localhost:None "POST /v1.46/containers/roottestkeeperincorrectconfig-gw7-node1-1/exec HTTP/1.1" 201 74
No clickhouse process running. Start new one.
http://localhost:None "POST /v1.46/containers/roottesthedgedrequestsparallel-gw3-node-1/exec HTTP/1.1" 201 74
http://localhost:None "POST /v1.46/exec/e74a31cb568ff6f9aa3846fd7474d7a1a202359f1fc066ecdc3f98aeb0b5ca9f/start HTTP/1.1" 200 0
http://localhost:None "GET /v1.46/exec/e74a31cb568ff6f9aa3846fd7474d7a1a202359f1fc066ecdc3f98aeb0b5ca9f/json HTTP/1.1" 200 586
http://localhost:None "POST /v1.46/exec/931fdb878218464174ba1deb566ce5cde314263191adced2770d53413c851da4/start HTTP/1.1" 200 0
http://localhost:None "GET /v1.46/exec/931fdb878218464174ba1deb566ce5cde314263191adced2770d53413c851da4/json HTTP/1.1" 200 586
Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_local_encrypted_key192b' on node
Executing query DROP TABLE IF EXISTS dist on n2
Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_local' on node
Stderr: Container roottestexplainestimates-gw8-instance-1 Stopping
Stderr: Container roottestexplainestimates-gw8-instance-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/instance/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/instance/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/.env --project-name roottestexplainestimates-gw8 --file /ClickHouse/tests/integration/test_explain_estimates/_instances-0-gw8/instance/docker-compose.yml down --volumes]
Executing query DROP TABLE IF EXISTS data on n3
Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node
Executing query DROP TABLE IF EXISTS dist on n3
Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_local' on node
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query DROP TABLE IF EXISTS data on n4
Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node
Stdout:8
Executing query DROP TABLE IF EXISTS dist on n4
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stderr: Container roottestexplainestimates-gw8-instance-1 Stopping
Stderr: Container roottestexplainestimates-gw8-instance-1 Stopped
Stderr: Container roottestexplainestimates-gw8-instance-1 Removing
Stderr: Container roottestexplainestimates-gw8-instance-1 Removed
Stderr: Network roottestexplainestimates-gw8_default Removing
Stderr: Network roottestexplainestimates-gw8_default Removed
Cleanup called
Docker networks for project roottestexplainestimates-gw8 are NETWORK ID NAME DRIVER SCOPE
Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node
Stdout:1439
Clickhouse process running.
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
[gw1] PASSED test_encrypted_disk/test.py::test_part_move[local_policy-destination_disks0]
Stdout:3887
Clickhouse process running.
run container_id:roottesthedgedrequestsparallel-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottesthedgedrequestsparallel-gw3-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Docker containers for project roottestexplainestimates-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestexplainestimates-gw8 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestexplainestimates-gw8-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottestexplainestimates-gw8
Trying to prune unused networks...
Stdout:1439
Executing query select 20 on node1
Stdout:3887
Executing query select 20 on node
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:3
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
Volumes pruned: 3
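test_part_move, which just passed for local_policy above, cycles a single part through every destination disk and re-reads the table after each hop. The pattern, as far as the queries alone show, again assuming a node.query() helper:

    # destination_disks0 from the log: encrypted disks with different keys,
    # plus the plain disk at the end.
    destination_disks = ["disk_local_encrypted", "disk_local_encrypted2",
                         "disk_local_encrypted_key192b", "disk_local"]
    for disk in destination_disks:
        node.query(f"ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK '{disk}'")
        # The data must survive every re-encryption hop byte-for-byte.
        assert node.query("SELECT * FROM encrypted_test ORDER BY id FORMAT Values") \
            == "(0,'data'),(1,'data')"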
Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n1
test_encrypted_disk/test.py::test_part_move[s3_policy-destination_disks1]
Executing query CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS storage_policy='s3_policy', temporary_directories_lifetime=1 on node
Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n1
Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node
Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n1
Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node
Executing query select 20 on node1
Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n2
Executing query select 20 on node
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:8
Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_s3_encrypted' on node
Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n2
Executing query SELECT count() FROM distributed on node
Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node
Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n2
Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_s3_encrypted' on node
Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n3
Executing query select 20 on node1
Executing query SELECT value FROM system.events WHERE event='HedgedRequestsChangeReplica' on node
Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_s3' on node
Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n3
Command:[docker compose --env-file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/.env --project-name roottesthedgedrequestsparallel-gw3 --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_1/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_2/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_3/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_4/docker-compose.yml stop --timeout 20]
[gw3] PASSED test_hedged_requests_parallel/test.py::test_send_table_status_sleep
Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n3
Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
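test_send_table_status_sleep, marked PASSED above, is verified through the system.events counter queried just before teardown: after the sleep_in_send_tables_status_ms values staged earlier stall two replicas, the initiator must have switched replicas. A sketch of that check (node.query assumed as before; the exact threshold is an assumption, not taken from the log):

    # A query that fans out over the cluster; the log shows a count() probe.
    node.query("SELECT count() FROM distributed")
    changed = int(node.query(
        "SELECT value FROM system.events WHERE event='HedgedRequestsChangeReplica'"))
    # The initiator had to abandon at least one slow replica (assumed threshold).
    assert changed > 0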
'{print $1}'"] Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottestkeeperbrokenlogs-gw9-node2-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestkeeperbrokenlogs-gw9-node2-1 bash -c ps -C clickhouse] Stdout: PID TTY TIME CMD Stdout: 8 ? 00:00:01 clickhouse run container_id:roottestkeeperbrokenlogs-gw9-node2-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestkeeperbrokenlogs-gw9-node2-1 bash -c pkill clickhouse] Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n4 run container_id:roottestkeeperbrokenlogs-gw9-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperbrokenlogs-gw9-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:8 Executing query ALTER TABLE encrypted_test MOVE PART 'all_1_1_0' TO DISK 'disk_s3' on node Executing query select 20 on node1 Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n4 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n4 Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_part_move[s3_policy-destination_disks1] Executing query INSERT INTO dist SELECT number, randomPrintableASCII(100) FROM numbers(10000) on n4 test_encrypted_disk/test.py::test_read_in_order Executing query CREATE TABLE encrypted_test(`a` UInt64, `b` String(150)) ENGINE = MergeTree() ORDER BY (a, b) SETTINGS storage_policy='encrypted_policy' on node Executing query select 20 on node1 run container_id:roottestinsertdistributedasyncsend-gw0-n4-1 detach:False nothrow:False cmd: ['bash', '-c', 'wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n4-1 bash -c wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin] Executing query INSERT INTO encrypted_test SELECT * FROM generateRandom('a UInt64, b FixedString(150)') LIMIT 100000 on node Stdout:1054722 run container_id:roottestinsertdistributedasyncsend-gw0-n4-1 detach:False nothrow:False cmd: ['bash', '-c', 'mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1054712 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n4-1 bash -c mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1054712 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin] Executing query SYSTEM FLUSH DISTRIBUTED dist on n4 run container_id:roottestkeeperbrokenlogs-gw9-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperbrokenlogs-gw9-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:8 Executing query SELECT * FROM encrypted_test ORDER BY a, b SETTINGS optimize_read_in_order=1 
FORMAT Null on node Executing query SYSTEM FLUSH DISTRIBUTED dist on n4 Executing query select 20 on node1 run container_id:roottestinsertdistributedasyncsend-gw0-n4-1 detach:False nothrow:False cmd: ['bash', '-c', 'ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n4-1 bash -c ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin] Executing query SELECT * FROM encrypted_test ORDER BY a, b SETTINGS optimize_read_in_order=0 FORMAT Null on node Stdout:/var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin Executing query SELECT count() FROM data on n3 Executing query SELECT count() FROM data on n4 Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node [gw1] PASSED test_encrypted_disk/test.py::test_read_in_order [gw0] PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[0-1] test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[1-0] Executing query DROP TABLE IF EXISTS data on n1 Executing query select total_space from system.disks where name = 'jbod1' on node1 run container_id:roottestkeeperbrokenlogs-gw9-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperbrokenlogs-gw9-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query select 20 on node1 Stdout:8 test_encrypted_disk/test.py::test_restart Executing query DROP TABLE IF EXISTS encrypted_test; CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS disk='disk_s3_encrypted_default_path' on node Executing query DROP TABLE IF EXISTS dist on n1 Executing query DROP TABLE IF EXISTS tbl SYNC on node1 Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node Executing query DROP TABLE IF EXISTS data on n2 Executing query DROP TABLE IF EXISTS tbl SYNC on node2 Executing query DROP TABLE IF EXISTS dist on n2 Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node Command:[docker compose --env-file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/.env --project-name roottestjbodha-gw4 --file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node2/docker-compose.yml stop --timeout 20] [gw4] PASSED test_jbod_ha/test.py::test_jbod_ha Executing query DROP TABLE IF EXISTS data on n3 Executing query select 20 on node1 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestencrypteddisk-gw1-node-1 bash -c ps -C clickhouse] Stdout: PID TTY TIME CMD Stdout: 8 ? 
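test_read_in_order, just PASSED above, is a smoke test that an encrypted disk behaves the same under both read paths: it streams 100000 random rows through ORDER BY (a, b) with optimize_read_in_order on and off, discarding output with FORMAT Null, so any decryption or ordering fault surfaces as an exception rather than a diff. The shape of it, with node.query assumed as before:

    node.query("CREATE TABLE encrypted_test(`a` UInt64, `b` String(150)) "
               "ENGINE = MergeTree() ORDER BY (a, b) "
               "SETTINGS storage_policy='encrypted_policy'")
    node.query("INSERT INTO encrypted_test SELECT * FROM "
               "generateRandom('a UInt64, b FixedString(150)') LIMIT 100000")
    # Same scan via the in-order fast path and via the generic sort path.
    for flag in (1, 0):
        node.query("SELECT * FROM encrypted_test ORDER BY a, b "
                   f"SETTINGS optimize_read_in_order={flag} FORMAT Null")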
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse']
Command:[docker exec -u root roottestencrypteddisk-gw1-node-1 bash -c pkill clickhouse]
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:8
Executing query DROP TABLE IF EXISTS dist on n3
run container_id:roottestkeeperbrokenlogs-gw9-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
run container_id:roottestkeeperbrokenlogs-gw9-node3-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse']
Command:[docker exec -u root roottestkeeperbrokenlogs-gw9-node3-1 bash -c ps -C clickhouse]
Stdout: PID TTY TIME CMD
Stdout: 8 ? 00:00:02 clickhouse
run container_id:roottestkeeperbrokenlogs-gw9-node3-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse']
Command:[docker exec -u root roottestkeeperbrokenlogs-gw9-node3-1 bash -c pkill clickhouse]
run container_id:roottestkeeperbrokenlogs-gw9-node3-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node3-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:8
Executing query DROP TABLE IF EXISTS data on n4
Executing query DROP TABLE IF EXISTS dist on n4
Executing query select 20 on node1
Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n1
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:8
Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n1
Executing query select 20 on node1
run container_id:roottestkeeperbrokenlogs-gw9-node3-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node3-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stderr: Container roottestjbodha-gw4-node1-1 Stopping
Stderr: Container roottestjbodha-gw4-node2-1 Stopping
Stderr: Container roottestjbodha-gw4-node1-1 Stopped
Stderr: Container roottestjbodha-gw4-node2-1 Stopped
Stderr: Container roottestjbodha-gw4-zoo2-1 Stopping
Stderr: Container roottestjbodha-gw4-zoo3-1 Stopping
Stderr: Container roottestjbodha-gw4-zoo1-1 Stopping
Stderr: Container roottestjbodha-gw4-zoo2-1 Stopped
Stderr: Container roottestjbodha-gw4-zoo1-1 Stopped
Stderr: Container roottestjbodha-gw4-zoo3-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/.env --project-name roottestjbodha-gw4 --file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/node2/docker-compose.yml down --volumes]
Stdout:8
Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n1
Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n2
Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n2
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stderr: Container roottestjbodha-gw4-node2-1 Stopping
Stderr: Container roottestjbodha-gw4-node1-1 Stopping
Stderr: Container roottestjbodha-gw4-node2-1 Stopped
Stderr: Container roottestjbodha-gw4-node2-1 Removing
Stderr: Container roottestjbodha-gw4-node1-1 Stopped
Stderr: Container roottestjbodha-gw4-node1-1 Removing
Stderr: Container roottestjbodha-gw4-node1-1 Removed
Stderr: Container roottestjbodha-gw4-node2-1 Removed
Stderr: Container roottestjbodha-gw4-zoo3-1 Stopping
Stderr: Container roottestjbodha-gw4-zoo1-1 Stopping
Stderr: Container roottestjbodha-gw4-zoo2-1 Stopping
Stderr: Container roottestjbodha-gw4-zoo1-1 Stopped
Stderr: Container roottestjbodha-gw4-zoo1-1 Removing
Stderr: Container roottestjbodha-gw4-zoo3-1 Stopped
Stderr: Container roottestjbodha-gw4-zoo3-1 Removing
Stderr: Container roottestjbodha-gw4-zoo2-1 Stopped
Stderr: Container roottestjbodha-gw4-zoo2-1 Removing
Stderr: Container roottestjbodha-gw4-zoo1-1 Removed
Stderr: Container roottestjbodha-gw4-zoo3-1 Removed
Stderr: Container roottestjbodha-gw4-zoo2-1 Removed
Stderr: Network roottestjbodha-gw4_default Removing
Stderr: Network roottestjbodha-gw4_default Removed
Cleanup called
Docker networks for project roottestjbodha-gw4 are NETWORK ID NAME DRIVER SCOPE
Current start attempt failed. Will kill 1439 just in case.
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'kill -9 1439']
Command:[docker exec -u root roottestkeeperincorrectconfig-gw7-node1-1 bash -c kill -9 1439]
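The bracketed bash one-liners above are the teardown's sanitizer sweep: before docker compose down --volumes, the harness zgreps each instance's stderr.log for the "==================" banner that ASan/TSan print around reports. A sketch of the same sweep, with hypothetical helper names:

    import subprocess

    def sanitizer_report_lines(stderr_log):
        # Mirrors the log's one-liner: tolerate a missing file, scan rotated
        # (possibly gzipped) logs, and never fail the sweep itself.
        script = (f'[ -f {stderr_log} ] && zgrep -aH "==================" '
                  f'{stderr_log}* || true')
        out = subprocess.run(["bash", "-c", script], capture_output=True, text=True)
        return out.stdout.splitlines()

    for node_dir in ["node1", "node2"]:  # instance dirs as in the jbod teardown above
        hits = sanitizer_report_lines(
            "/ClickHouse/tests/integration/test_jbod_ha/_instances-0-gw4/"
            f"{node_dir}/logs/stderr.log")
        assert not hits, f"sanitizer report found in {node_dir}: {hits[:3]}"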
Docker containers for project roottestjbodha-gw4 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestjbodha-gw4 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestjbodha-gw4-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottestjbodha-gw4
Trying to prune unused networks...
Stderr:bash: line 1: kill: (1439) - No such process
Exitcode:1
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:3
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
Volumes pruned: 3
test_keeper_availability_zone/test.py::test_get_availability_zone
Running tests in /ClickHouse/tests/integration/test_keeper_availability_zone/test.py
Cluster start called. is_up=False
Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n2
Docker networks for project roottestkeeperavailabilityzone-gw4 are NETWORK ID NAME DRIVER SCOPE
Stdout:8
Docker containers for project roottestkeeperavailabilityzone-gw4 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestkeeperavailabilityzone-gw4 are DRIVER VOLUME NAME
Cleanup called
Docker networks for project roottestkeeperavailabilityzone-gw4 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestkeeperavailabilityzone-gw4 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestkeeperavailabilityzone-gw4 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestkeeperavailabilityzone-gw4-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottestkeeperavailabilityzone-gw4
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:3
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
Volumes pruned: 3
Setup directory for instance: node
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files ['/ClickHouse/tests/integration/test_keeper_availability_zone/configs/keeper_config.xml'] to /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/node/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/node/database
Setup logs dir /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/node/logs
Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!"
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/.env
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
http://localhost:None "GET /version HTTP/1.1" 200 826
Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/.env --project-name roottestkeeperavailabilityzone-gw4 --file /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml pull]
run container_id:roottestkeeperbrokenlogs-gw9-node3-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node3-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
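The Entrypoint cmd recorded above is what keeps every instance container alive: clickhouse-server daemonizes, a tail -f /dev/null coproc becomes the long-lived foreground process, and the trap lets docker stop end it cleanly. The same command, only reassembled as a commented Python constant for readability:

    # trap: on INT/TERM, kill the tail coproc so the container exits promptly;
    # --daemon: clickhouse-server detaches, so something else must stay in the
    # foreground; coproc tail -f /dev/null is that something, and wait blocks on it.
    ENTRYPOINT_CMD = (
        'bash -c "'
        "trap 'pkill tail' INT TERM; "
        "clickhouse server"
        " --config-file=/etc/clickhouse-server/config.xml"
        " --log-file=/var/log/clickhouse-server/clickhouse-server.log"
        " --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log"
        " --daemon -- ; "
        'coproc tail -f /dev/null; wait $$!"'
    )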
Stdout:8
Stdout:745
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n\n \n 9181\n 1\n /var/lib/clickhouse/coordination/log\n /var/lib/clickhouse/coordination/snapshots\n\n \n 5000\n 10000\n trace\n \n\n \n \n 1\n node1\n 9234\n \n \n 1\n node2\n 9234\n \n \n \n\n' > /etc/clickhouse-server/config.d/enable_keeper1.xml"]
Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c echo ' 9181 1 /var/lib/clickhouse/coordination/log /var/lib/clickhouse/coordination/snapshots 5000 10000 trace 1 node1 9234 1 node2 9234 ' > /etc/clickhouse-server/config.d/enable_keeper1.xml]
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n3
Stderr: Container roottesthedgedrequestsparallel-gw3-node_2-1 Stopping
Stderr: Container roottesthedgedrequestsparallel-gw3-node_4-1 Stopping
Stderr: Container roottesthedgedrequestsparallel-gw3-node-1 Stopping
Stderr: Container roottesthedgedrequestsparallel-gw3-node_3-1 Stopping
Stderr: Container roottesthedgedrequestsparallel-gw3-node_1-1 Stopping
Stderr: Container roottesthedgedrequestsparallel-gw3-node-1 Stopped
Stderr: Container roottesthedgedrequestsparallel-gw3-node_3-1 Stopped
Stderr: Container roottesthedgedrequestsparallel-gw3-node_4-1 Stopped
Stderr: Container roottesthedgedrequestsparallel-gw3-node_2-1 Stopped
Stderr: Container roottesthedgedrequestsparallel-gw3-node_1-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_3/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_3/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
No clickhouse process running. Start new one.
http://localhost:None "POST /v1.46/containers/roottestkeeperincorrectconfig-gw7-node1-1/exec HTTP/1.1" 201 74
Command:[bash -c [ -f /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_4/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_4/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/.env --project-name roottesthedgedrequestsparallel-gw3 --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_1/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_2/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_3/docker-compose.yml --file /ClickHouse/tests/integration/test_hedged_requests_parallel/_instances-0-gw3/node_4/docker-compose.yml down --volumes]
http://localhost:None "POST /v1.46/exec/b1c4c1c3461d364e52937443a2a3b5e6a4c22830ddcf593d3853bf4acfbe4bc1/start HTTP/1.1" 200 0
http://localhost:None "GET /v1.46/exec/b1c4c1c3461d364e52937443a2a3b5e6a4c22830ddcf593d3853bf4acfbe4bc1/json HTTP/1.1" 200 586
Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n3
Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n3
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n4
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
No clickhouse process running. Start new one.
http://localhost:None "POST /v1.46/containers/roottestencrypteddisk-gw1-node-1/exec HTTP/1.1" 201 74
Stderr: Container roottesthedgedrequestsparallel-gw3-node_3-1 Stopping
Stderr: Container roottesthedgedrequestsparallel-gw3-node-1 Stopping
Stderr: Container roottesthedgedrequestsparallel-gw3-node_1-1 Stopping
Stderr: Container roottesthedgedrequestsparallel-gw3-node_2-1 Stopping
Stderr: Container roottesthedgedrequestsparallel-gw3-node_4-1 Stopping
Stderr: Container roottesthedgedrequestsparallel-gw3-node_3-1 Stopped
Stderr: Container roottesthedgedrequestsparallel-gw3-node_3-1 Removing
Stderr: Container roottesthedgedrequestsparallel-gw3-node_1-1 Stopped
Stderr: Container roottesthedgedrequestsparallel-gw3-node_4-1 Stopped
Stderr: Container roottesthedgedrequestsparallel-gw3-node_1-1 Removing
Stderr: Container roottesthedgedrequestsparallel-gw3-node-1 Stopped
Stderr: Container roottesthedgedrequestsparallel-gw3-node-1 Removing
Stderr: Container roottesthedgedrequestsparallel-gw3-node_4-1 Removing
Stderr: Container roottesthedgedrequestsparallel-gw3-node_2-1 Stopped
Stderr: Container roottesthedgedrequestsparallel-gw3-node_2-1 Removing
Stderr: Container roottesthedgedrequestsparallel-gw3-node_1-1 Removed
Stderr: Container roottesthedgedrequestsparallel-gw3-node_3-1 Removed
Stderr: Container roottesthedgedrequestsparallel-gw3-node_4-1 Removed
Stderr: Container roottesthedgedrequestsparallel-gw3-node-1 Removed
Stderr: Container roottesthedgedrequestsparallel-gw3-node_2-1 Removed
Stderr: Network roottesthedgedrequestsparallel-gw3_default Removing
Stderr: Network roottesthedgedrequestsparallel-gw3_default Removed
Cleanup called
http://localhost:None "POST /v1.46/exec/071143d028706bf722201bb85174aa1d849b03434e499effb501511a05ce5a8e/start HTTP/1.1" 200 0
http://localhost:None "GET /v1.46/exec/071143d028706bf722201bb85174aa1d849b03434e499effb501511a05ce5a8e/json HTTP/1.1" 200 586
Docker networks for project roottesthedgedrequestsparallel-gw3 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottesthedgedrequestsparallel-gw3 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottesthedgedrequestsparallel-gw3 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottesthedgedrequestsparallel-gw3-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottesthedgedrequestsparallel-gw3
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:3
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
Volumes pruned: 3
Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n4 Stdout:Total reclaimed space: 0B Volumes pruned: 3 run container_id:roottestkeeperbrokenlogs-gw9-node3-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperbrokenlogs-gw9-node3-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['rm', '-rf', '/var/lib/clickhouse/coordination/log'] Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 rm -rf /var/lib/clickhouse/coordination/log] run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['rm', '-rf', '/var/lib/clickhouse/coordination/snapshots'] Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 rm -rf /var/lib/clickhouse/coordination/snapshots] run container_id:roottestkeeperbrokenlogs-gw9-node2-1 detach:False nothrow:False cmd: ['rm', '-rf', '/var/lib/clickhouse/coordination/log'] Command:[docker exec roottestkeeperbrokenlogs-gw9-node2-1 rm -rf /var/lib/clickhouse/coordination/log] run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n4 run container_id:roottestkeeperbrokenlogs-gw9-node2-1 detach:False nothrow:False cmd: ['rm', '-rf', '/var/lib/clickhouse/coordination/snapshots'] Command:[docker exec roottestkeeperbrokenlogs-gw9-node2-1 rm -rf /var/lib/clickhouse/coordination/snapshots] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestkeeperbrokenlogs-gw9-node1-1/exec HTTP/1.1" 201 74 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "POST /v1.46/exec/301202a11fc0fada7062ec5f8b7ec75312585deac2b4aab657346ff213ec7b38/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/301202a11fc0fada7062ec5f8b7ec75312585deac2b4aab657346ff213ec7b38/json HTTP/1.1" 200 586 run container_id:roottestkeeperbrokenlogs-gw9-node3-1 detach:False nothrow:False cmd: ['rm', '-rf', '/var/lib/clickhouse/coordination/log'] Command:[docker exec roottestkeeperbrokenlogs-gw9-node3-1 rm -rf /var/lib/clickhouse/coordination/log] run container_id:roottestkeeperbrokenlogs-gw9-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperbrokenlogs-gw9-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:2083 Clickhouse process running. 
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
run container_id:roottestkeeperbrokenlogs-gw9-node3-1 detach:False nothrow:False cmd: ['rm', '-rf', '/var/lib/clickhouse/coordination/snapshots']
Command:[docker exec roottestkeeperbrokenlogs-gw9-node3-1 rm -rf /var/lib/clickhouse/coordination/snapshots]
No clickhouse process running. Start new one.
http://localhost:None "POST /v1.46/containers/roottestkeeperbrokenlogs-gw9-node2-1/exec HTTP/1.1" 201 74
Stdout:2083
Executing query select 20 on node1
http://localhost:None "POST /v1.46/exec/c11fc80cbae8fa192ab1a1af1f6f27aca6f9b1305131c691992515b5ef276951/start HTTP/1.1" 200 0
http://localhost:None "GET /v1.46/exec/c11fc80cbae8fa192ab1a1af1f6f27aca6f9b1305131c691992515b5ef276951/json HTTP/1.1" 200 586
run container_id:roottestkeeperbrokenlogs-gw9-node3-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node3-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
No clickhouse process running. Start new one.
http://localhost:None "POST /v1.46/containers/roottestkeeperbrokenlogs-gw9-node3-1/exec HTTP/1.1" 201 74
Executing query INSERT INTO dist SELECT number, randomPrintableASCII(100) FROM numbers(10000) on n1
http://localhost:None "POST /v1.46/exec/59d71eb1ee9f0c87516ce9cb3c95fe07439662c4911f115f98b31813b5a1aaee/start HTTP/1.1" 200 0
http://localhost:None "GET /v1.46/exec/59d71eb1ee9f0c87516ce9cb3c95fe07439662c4911f115f98b31813b5a1aaee/json HTTP/1.1" 200 586
run container_id:roottestinsertdistributedasyncsend-gw0-n1-1 detach:False nothrow:False cmd: ['bash', '-c', 'wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin']
Command:[docker exec roottestinsertdistributedasyncsend-gw0-n1-1 bash -c wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin]
Stdout:1054688
run container_id:roottestinsertdistributedasyncsend-gw0-n1-1 detach:False nothrow:False cmd: ['bash', '-c', 'mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1054678 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin']
Command:[docker exec roottestinsertdistributedasyncsend-gw0-n1-1 bash -c mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1054678 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin]
Executing query SYSTEM FLUSH DISTRIBUTED dist on n1
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:1061
Clickhouse process running.
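The wc -c / head -c pair above is the corruption step of the truncated-batch test: the pending distributed batch file (2.bin) is rewritten 10 bytes short (1054688 -> 1054678), so the flush sees a truncated block. A hedged sketch of the same step (paths copied from the log; the function name is hypothetical):

    import subprocess

    def truncate_batch(container: str, bin_path: str, cut: int = 10) -> None:
        # Measure the pending batch file inside the container, then rewrite it
        # minus the last `cut` bytes, mirroring the log's mv + head -c dance.
        size = int(subprocess.check_output(
            ["docker", "exec", container, "bash", "-c", f"wc -c < {bin_path}"],
            text=True,
        ))
        subprocess.check_call([
            "docker", "exec", container, "bash", "-c",
            f"mv {bin_path} /tmp/bin && head -c {size - cut} /tmp/bin > {bin_path}",
        ])

    # truncate_batch("roottestinsertdistributedasyncsend-gw0-n1-1",
    #                "/var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin")

After this, SYSTEM FLUSH DISTRIBUTED should move the damaged file into the broken/ subdirectory, which is exactly what the later `ls .../broken/2.bin` check verifies.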
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:1061
Executing query select 20 on node
Executing query SYSTEM FLUSH DISTRIBUTED dist on n1
Executing query select 20 on node1
run container_id:roottestinsertdistributedasyncsend-gw0-n1-1 detach:False nothrow:False cmd: ['bash', '-c', 'ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin']
Command:[docker exec roottestinsertdistributedasyncsend-gw0-n1-1 bash -c ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin]
Stdout:/var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin
Executing query SELECT count() FROM data on n1
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:805
Clickhouse process running.
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
run container_id:roottestkeeperbrokenlogs-gw9-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:805
Executing query select 20 on node1
Stdout:792
Clickhouse process running.
run container_id:roottestkeeperbrokenlogs-gw9-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
run container_id:roottestkeeperbrokenlogs-gw9-node3-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node3-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:792
Executing query select 20 on node2
Stdout:794
Clickhouse process running.
run container_id:roottestkeeperbrokenlogs-gw9-node3-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node3-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query SELECT count() FROM data on n2
Stdout:794
Executing query select 20 on node3
Executing query select 20 on node
[gw0] PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[1-0]
test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[1-1]
Executing query DROP TABLE IF EXISTS data on n1
Executing query select 20 on node1
Executing query DROP TABLE IF EXISTS dist on n1
Executing query select 20 on node1
Executing query DROP TABLE IF EXISTS data on n2
Executing query select 20 on node2
Executing query DROP TABLE IF EXISTS dist on n2
Executing query select 20 on node3
Executing query select 20 on node
Executing query DROP TABLE IF EXISTS data on n3
Executing query select 20 on node1
Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node
Executing query DROP TABLE IF EXISTS dist on n3
Executing query select 20 on node1
Executing query DROP TABLE encrypted_test SYNC; on node
Executing query select 20 on node2
Executing query DROP TABLE IF EXISTS data on n4
Executing query select 20 on node3
Executing query DROP TABLE IF EXISTS encrypted_test; CREATE TABLE encrypted_test ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id SETTINGS disk='encrypted_s3_cache' on node
Executing query DROP TABLE IF EXISTS dist on n4
Executing query select 20 on node1
Executing query INSERT INTO encrypted_test VALUES (0,'data'),(1,'data') on node
Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n1
Connection dropped: socket connection error: None
Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n1
Executing query select 20 on node1
Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node
Executing query select 20 on node2
Executing query select 20 on node3
Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n1
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse']
Command:[docker exec -u root roottestencrypteddisk-gw1-node-1 bash -c ps -C clickhouse]
Stdout: PID TTY TIME CMD
Stdout: 1061 ? 00:00:04 clickhouse
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse']
Command:[docker exec -u root roottestencrypteddisk-gw1-node-1 bash -c pkill clickhouse]
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:1061
Executing query select 20 on node1
Waiting until keeper will be ready on node1:9181 (timeout=30.000000)
Sending mntr to :9181
get_instance_ip instance_name=node1
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node1-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=node1
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node1-1/json HTTP/1.1" 200 None
Waiting until keeper can create sessions on 172.16.3.2:9181 (timeout=30.000000)
Connecting to 172.16.3.2(172.16.3.2):9181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=29993, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetData(path='/keeper/api_version', watcher=None)
Received response(xid=1): (b'2', ZnodeStat(czxid=0, mzxid=0, ctime=0, mtime=0, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=1, numChildren=0, pzxid=0))
Sending request(xid=2): Close()
Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n2
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
Waiting until keeper will be ready on node2:9181 (timeout=30.000000)
Sending mntr to :9181
get_instance_ip instance_name=node2
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node2-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=node2
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node2-1/json HTTP/1.1" 200 None
Waiting until keeper can create sessions on 172.16.3.4:9181 (timeout=30.000000)
Connecting to 172.16.3.4(172.16.3.4):9181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=29994, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetData(path='/keeper/api_version', watcher=None)
Received response(xid=1): (b'2', ZnodeStat(czxid=0, mzxid=0, ctime=0, mtime=0, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=1, numChildren=0, pzxid=0))
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Failed connecting to Zookeeper within the connection retry policy.
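The "Sending mntr to :9181" lines above are the readiness probe: Keeper answers ZooKeeper-style four-letter-word commands on its client port, and a failed or empty reply means the server is not up yet. A minimal sketch of such a probe (host/port from the log; this is an illustration, not the harness's actual helper):

    import socket

    def send_4lw(host: str, port: int = 9181, cmd: bytes = b"mntr") -> str:
        # Send a four-letter-word command and read the reply until EOF;
        # ConnectionRefusedError / an empty reply means "not ready, retry".
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(cmd)
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode()

Only after mntr succeeds does the harness open a real session and read /keeper/api_version, as the Connect/GetData exchanges above show.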
Zookeeper session closed, state: CLOSED
Waiting until keeper will be ready on node3:9181 (timeout=30.000000)
Sending mntr to :9181
get_instance_ip instance_name=node3
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node3-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=node3
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node3-1/json HTTP/1.1" 200 None
Waiting until keeper can create sessions on 172.16.3.3:9181 (timeout=30.000000)
Connecting to 172.16.3.3(172.16.3.3):9181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=29993, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetData(path='/keeper/api_version', watcher=None)
Received response(xid=1): (b'2', ZnodeStat(czxid=0, mzxid=0, ctime=0, mtime=0, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=1, numChildren=0, pzxid=0))
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n2
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=node1
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node1-1/json HTTP/1.1" 200 None
Connecting to 172.16.3.2(172.16.3.2):9181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): Create(path='/test_broken_log', data=b'', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0)
Received response(xid=1): '/test_broken_log'
Sending request(xid=2): Create(path='/test_broken_log/node', data=b'somedata1', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=2)
Received response(xid=2): '/test_broken_log/node0000000000'
Sending request(xid=3): Create(path='/test_broken_log/node', data=b'somedata1', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=2)
Received response(xid=3): '/test_broken_log/node0000000001'
Sending request(xid=4): Create(path='/test_broken_log/node', data=b'somedata1', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=2)
Received response(xid=4): '/test_broken_log/node0000000002'
Sending request(xid=5): Create(path='/test_broken_log/node', data=b'somedata1', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=2)
Received response(xid=5): '/test_broken_log/node0000000003'
Sending request(xid=6): Create(path='/test_broken_log/node', data=b'somedata1', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=2)
Received response(xid=6): '/test_broken_log/node0000000004'
Sending request(xid=7): Create(path='/test_broken_log/node', data=b'somedata1', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=2)
Received response(xid=7): '/test_broken_log/node0000000005'
Sending request(xid=8): Create(path='/test_broken_log/node', data=b'somedata1', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=2)
Received response(xid=8): '/test_broken_log/node0000000006'
Sending request(xid=9): Create(path='/test_broken_log/node', data=b'somedata1', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=2)
Received response(xid=9): '/test_broken_log/node0000000007'
Sending request(xid=10): Create(path='/test_broken_log/node', data=b'somedata1', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=2)
Received response(xid=10): '/test_broken_log/node0000000008'
Sending request(xid=11): Create(path='/test_broken_log/node', data=b'somedata1', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=2)
Received response(xid=11): '/test_broken_log/node0000000009'
Sending request(xid=12): GetChildren(path='/test_broken_log', watcher=None)
Received response(xid=12): ['node0000000002', 'node0000000000', 'node0000000008', 'node0000000004', 'node0000000003', 'node0000000001', 'node0000000007', 'node0000000005', 'node0000000009', 'node0000000006']
Sending request(xid=13): GetData(path='/test_broken_log/node0000000002', watcher=None)
Received response(xid=13): (b'somedata1', ZnodeStat(czxid=7, mzxid=7, ctime=1743560879547, mtime=1743560879547, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=7))
Sending request(xid=14): GetData(path='/test_broken_log/node0000000000', watcher=None)
Received response(xid=14): (b'somedata1', ZnodeStat(czxid=5, mzxid=5, ctime=1743560879540, mtime=1743560879540, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=5))
Sending request(xid=15): GetData(path='/test_broken_log/node0000000008', watcher=None)
Received response(xid=15): (b'somedata1', ZnodeStat(czxid=13, mzxid=13, ctime=1743560879572, mtime=1743560879572, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=13))
Sending request(xid=16): GetData(path='/test_broken_log/node0000000004', watcher=None)
Received response(xid=16): (b'somedata1', ZnodeStat(czxid=9, mzxid=9, ctime=1743560879554, mtime=1743560879554, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=9))
Sending request(xid=17): GetData(path='/test_broken_log/node0000000003', watcher=None)
Received response(xid=17): (b'somedata1', ZnodeStat(czxid=8, mzxid=8, ctime=1743560879551, mtime=1743560879551, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=8))
Sending request(xid=18): GetData(path='/test_broken_log/node0000000001', watcher=None)
Received response(xid=18): (b'somedata1', ZnodeStat(czxid=6, mzxid=6, ctime=1743560879544, mtime=1743560879544, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=6))
Sending request(xid=19): GetData(path='/test_broken_log/node0000000007', watcher=None)
Received response(xid=19): (b'somedata1', ZnodeStat(czxid=12, mzxid=12, ctime=1743560879568, mtime=1743560879568, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=12))
Sending request(xid=20): GetData(path='/test_broken_log/node0000000005', watcher=None)
Received response(xid=20): (b'somedata1', ZnodeStat(czxid=10, mzxid=10, ctime=1743560879560, mtime=1743560879560, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=10))
Sending request(xid=21): GetData(path='/test_broken_log/node0000000009', watcher=None)
Received response(xid=21): (b'somedata1', ZnodeStat(czxid=14, mzxid=14, ctime=1743560879575, mtime=1743560879575, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=14))
Sending request(xid=22): GetData(path='/test_broken_log/node0000000006', watcher=None)
Received response(xid=22): (b'somedata1', ZnodeStat(czxid=11, mzxid=11, ctime=1743560879564, mtime=1743560879564, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=11))
Sending request(xid=23): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n2
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse']
Command:[docker exec -u root roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps -C clickhouse]
Stdout: PID TTY TIME CMD
Stdout: 805 ? 00:00:02 clickhouse
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse']
Command:[docker exec -u root roottestkeeperbrokenlogs-gw9-node1-1 bash -c pkill clickhouse]
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n3
Stdout:805
Executing query select 20 on node1
Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n3
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:1061
Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n3
Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n4
Executing query select 20 on node1
Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n4
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:805
Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n4
Executing query INSERT INTO dist SELECT number, randomPrintableASCII(100) FROM numbers(10000) on n3
Executing query select 20 on node1
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:1061
run container_id:roottestinsertdistributedasyncsend-gw0-n3-1 detach:False nothrow:False cmd: ['bash', '-c', 'wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin']
Command:[docker exec roottestinsertdistributedasyncsend-gw0-n3-1 bash -c wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin]
Stdout:1054755
run container_id:roottestinsertdistributedasyncsend-gw0-n3-1 detach:False nothrow:False cmd: ['bash', '-c', 'mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1054745 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin']
Command:[docker exec roottestinsertdistributedasyncsend-gw0-n3-1 bash -c mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 1054745 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin]
Executing query SYSTEM FLUSH DISTRIBUTED dist on n3
Executing query SYSTEM FLUSH DISTRIBUTED dist on n3
Executing query select 20 on node1
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
run container_id:roottestinsertdistributedasyncsend-gw0-n3-1 detach:False nothrow:False cmd: ['bash', '-c', 'ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin']
Command:[docker exec roottestinsertdistributedasyncsend-gw0-n3-1 bash -c ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin]
Stdout:805
Stdout:/var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin
Executing query SELECT count() FROM data on n3
Executing query SELECT count() FROM data on n4
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:1061
[gw0] PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[1-1]
test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_2[0]
Executing query DROP TABLE IF EXISTS data on n1
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query DROP TABLE IF EXISTS dist on n1
Current start attempt failed. Will kill 2083 just in case.
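The Create(..., flags=2) requests in the wire traffic above are sequential-node creates: flags=2 is the SEQUENCE flag, which is why '/test_broken_log/node' comes back as node0000000000, node0000000001, and so on. A hedged sketch of producing the same traffic with kazoo (the client library whose request/response logging appears throughout this log; the address is node1's Keeper port as seen here):

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="172.16.3.2:9181")
    zk.start()
    zk.ensure_path("/test_broken_log")
    for _ in range(10):
        # sequence=True is kazoo's spelling of the wire-level flags=2
        zk.create("/test_broken_log/node", b"somedata1", sequence=True)
    print(sorted(zk.get_children("/test_broken_log")))
    zk.stop()

These ten nodes are the payload the broken-logs test later checks for survival after the changelog is corrupted.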
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'kill -9 2083']
Command:[docker exec -u root roottestkeeperincorrectconfig-gw7-node1-1 bash -c kill -9 2083]
Stderr:bash: line 1: kill: (2083) - No such process
Exitcode:1
Executing query DROP TABLE IF EXISTS data on n2
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query DROP TABLE IF EXISTS dist on n2
Sending stat to :9181
get_instance_ip instance_name=node2
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node2-1/json HTTP/1.1" 200 None
Sending stat to :9181
get_instance_ip instance_name=node3
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node3-1/json HTTP/1.1" 200 None
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query DROP TABLE IF EXISTS data on n3
No clickhouse process running. Start new one.
http://localhost:None "POST /v1.46/containers/roottestkeeperincorrectconfig-gw7-node1-1/exec HTTP/1.1" 201 74
http://localhost:None "POST /v1.46/exec/e65ae6ec997037279aef624486a92ca655d637b9f0a5ae0fe4f5178ff16356c8/start HTTP/1.1" 200 0
http://localhost:None "GET /v1.46/exec/e65ae6ec997037279aef624486a92ca655d637b9f0a5ae0fe4f5178ff16356c8/json HTTP/1.1" 200 586
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query DROP TABLE IF EXISTS dist on n3
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
No clickhouse process running. Start new one.
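The nothrow:True kill above is deliberately best-effort: the stale PID from the failed start attempt may already be gone, so the non-zero exit ("No such process", Exitcode:1) is tolerated rather than failing the test. A small sketch of the same pattern (function name invented):

    import subprocess

    def kill_leftover(container: str, pid: int) -> None:
        # check=False mirrors the harness's nothrow=True: a dead PID is fine.
        subprocess.run(
            ["docker", "exec", "-u", "root", container, "bash", "-c", f"kill -9 {pid}"],
            check=False,
        )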
http://localhost:None "POST /v1.46/containers/roottestencrypteddisk-gw1-node-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/725797558fe1f04a7d26338496b371a06e842f1169e0b0207432b6df4a106e45/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/725797558fe1f04a7d26338496b371a06e842f1169e0b0207432b6df4a106e45/json HTTP/1.1" 200 586 Executing query DROP TABLE IF EXISTS data on n4 Executing query DROP TABLE IF EXISTS dist on n4 Sending stat to :9181 get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node2-1/json HTTP/1.1" 200 None Sending stat to :9181 get_instance_ip instance_name=node3 http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node3-1/json HTTP/1.1" 200 None Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n1 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:2720 Clickhouse process running. run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n1 Stdout:2720 Executing query select 20 on node1 run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n1 Stdout:1875 Clickhouse process running. 
run container_id:roottestencrypteddisk-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestencrypteddisk-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:1875
Executing query select 20 on node
Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n2
Stderr: zoo2 Skipped - Image is already being pulled by zoo1
Stderr: zoo3 Skipped - Image is already being pulled by zoo1
Stderr: node Skipped - Image is already being pulled by zoo1
Stderr: zoo1 Pulling
Stderr: zoo1 Pulled
Setup ZooKeeper
Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper1/log', '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper1/config', '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper1/coordination', '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper2/log', '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper2/config', '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper2/coordination', '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper3/log', '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper3/config', '/ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/keeper3/coordination']
Command:[docker compose --project-name roottestkeeperavailabilityzone-gw4 --env-file /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d]
Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n2
Executing query select 20 on node1
Sending stat to :9181
get_instance_ip instance_name=node2
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node2-1/json HTTP/1.1" 200 None
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['truncate', '-s', '-50', '/var/lib/clickhouse/coordination/log/changelog_1_100000.bin']
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 truncate -s -50 /var/lib/clickhouse/coordination/log/changelog_1_100000.bin]
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
No clickhouse process running. Start new one.
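The `truncate -s -50` above is the fault injection at the heart of test_keeper_broken_logs: with node1's clickhouse stopped, the tail of its Raft changelog is chopped off, leaving the last record half-written, and Keeper must detect and recover from this on the restart that follows. A hedged sketch of the same step (path and byte count from the log; the function name is illustrative):

    import subprocess

    def corrupt_changelog(
        container: str,
        path: str = "/var/lib/clickhouse/coordination/log/changelog_1_100000.bin",
        nbytes: int = 50,
    ) -> None:
        # `truncate -s -N` shrinks the file by N bytes, corrupting the final
        # changelog record; run only while the Keeper process is down.
        subprocess.check_call(
            ["docker", "exec", container, "truncate", "-s", f"-{nbytes}", path]
        )

    # corrupt_changelog("roottestkeeperbrokenlogs-gw9-node1-1")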
http://localhost:None "POST /v1.46/containers/roottestkeeperbrokenlogs-gw9-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/1220d040359936decbec2d64898af77b41e68165822323e49315097c6689412d/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/1220d040359936decbec2d64898af77b41e68165822323e49315097c6689412d/json HTTP/1.1" 200 586 Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n2 Executing query select 20 on node Stderr:time="2025-04-02T02:28:04Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestkeeperavailabilityzone-gw4_default Creating Stderr: Network roottestkeeperavailabilityzone-gw4_default Created Stderr: Container roottestkeeperavailabilityzone-gw4-zoo3-1 Creating Stderr: Container roottestkeeperavailabilityzone-gw4-zoo1-1 Creating Stderr: Container roottestkeeperavailabilityzone-gw4-zoo2-1 Creating Stderr: Container roottestkeeperavailabilityzone-gw4-zoo3-1 Created Stderr: Container roottestkeeperavailabilityzone-gw4-zoo1-1 Created Stderr: Container roottestkeeperavailabilityzone-gw4-zoo2-1 Created Stderr: Container roottestkeeperavailabilityzone-gw4-zoo3-1 Starting Stderr: Container roottestkeeperavailabilityzone-gw4-zoo2-1 Starting Stderr: Container roottestkeeperavailabilityzone-gw4-zoo1-1 Starting Stderr: Container roottestkeeperavailabilityzone-gw4-zoo3-1 Started Stderr: Container roottestkeeperavailabilityzone-gw4-zoo2-1 Started Stderr: Container roottestkeeperavailabilityzone-gw4-zoo1-1 Started Stderr:time="2025-04-02T02:28:05Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T02:28:05Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestkeeperavailabilityzone-gw4-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.2.4, port:2181, use_ssl:False Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n3 Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query select 20 on node1 Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n3 Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n3 Executing query select 20 on node Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n4 run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:1614 Clickhouse process running. 
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query SELECT * FROM encrypted_test ORDER BY id FORMAT Values on node
Stdout:1614
Executing query select 20 on node1
Executing query select 20 on node1
Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n4
Executing query DROP TABLE encrypted_test SYNC; on node
Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n4
Executing query INSERT INTO dist SELECT number, randomPrintableASCII(100) FROM numbers(10000) on n2
Executing query DROP TABLE IF EXISTS encrypted_test SYNC on node
[gw1] PASSED test_encrypted_disk/test.py::test_restart
run container_id:roottestinsertdistributedasyncsend-gw0-n2-1 detach:False nothrow:False cmd: ['bash', '-c', 'wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin']
Command:[docker exec roottestinsertdistributedasyncsend-gw0-n2-1 bash -c wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/.env --project-name roottestencrypteddisk-gw1 --file /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml stop --timeout 20]
Executing query select 20 on node1
Executing query select 20 on node1
Stdout:1054718
run container_id:roottestinsertdistributedasyncsend-gw0-n2-1 detach:False nothrow:False cmd: ['bash', '-c', 'mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 10000 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin']
Command:[docker exec roottestinsertdistributedasyncsend-gw0-n2-1 bash -c mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 10000 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin]
Executing query SYSTEM FLUSH DISTRIBUTED dist on n2
Executing query SYSTEM FLUSH DISTRIBUTED dist on n2
Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
run container_id:roottestinsertdistributedasyncsend-gw0-n2-1 detach:False nothrow:False cmd: ['bash', '-c', 'ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin']
Command:[docker exec roottestinsertdistributedasyncsend-gw0-n2-1 bash -c ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin]
Stdout:/var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin
Executing query SELECT count() FROM data on n1
Executing query select 20 on node1
Executing query select 20 on node1
Executing query SELECT count() FROM data on n2
Waiting until keeper will be ready on node1:9181 (timeout=30.000000)
Sending mntr to :9181
get_instance_ip instance_name=node1
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node1-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=node1
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node1-1/json HTTP/1.1" 200 None
Waiting until keeper can create sessions on 172.16.3.2:9181 (timeout=30.000000)
Connecting to 172.16.3.2(172.16.3.2):9181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=29993, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetData(path='/keeper/api_version', watcher=None)
Received response(xid=1): (b'2', ZnodeStat(czxid=0, mzxid=0, ctime=0, mtime=0, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=1, numChildren=0, pzxid=0))
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=node1
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node1-1/json HTTP/1.1" 200 None
Connecting to 172.16.3.2(172.16.3.2):9181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): Create(path='/test_broken_log_final_node', data=b'somedata1', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0)
Received response(xid=1): '/test_broken_log_final_node'
Sending request(xid=2): GetChildren(path='/test_broken_log', watcher=None)
Received response(xid=2): ['node0000000001', 'node0000000003', 'node0000000000', 'node0000000002', 'node0000000008', 'node0000000009', 'node0000000004', 'node0000000006', 'node0000000005', 'node0000000007']
Sending request(xid=3): GetData(path='/test_broken_log/node0000000001', watcher=None)
Received response(xid=3): (b'somedata1', ZnodeStat(czxid=6, mzxid=6, ctime=1743560879544, mtime=1743560879544, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=6))
Sending request(xid=4): GetData(path='/test_broken_log/node0000000003', watcher=None)
Received response(xid=4): (b'somedata1', ZnodeStat(czxid=8, mzxid=8, ctime=1743560879551, mtime=1743560879551, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=8))
Sending request(xid=5): GetData(path='/test_broken_log/node0000000000', watcher=None)
Received response(xid=5): (b'somedata1', ZnodeStat(czxid=5, mzxid=5, ctime=1743560879540, mtime=1743560879540, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=5))
Sending request(xid=6): GetData(path='/test_broken_log/node0000000002', watcher=None)
Received response(xid=6): (b'somedata1', ZnodeStat(czxid=7, mzxid=7, ctime=1743560879547, mtime=1743560879547, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=7))
Sending request(xid=7): GetData(path='/test_broken_log/node0000000008', watcher=None)
Received response(xid=7): (b'somedata1', ZnodeStat(czxid=13, mzxid=13, ctime=1743560879572, mtime=1743560879572, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=13))
Sending request(xid=8): GetData(path='/test_broken_log/node0000000009', watcher=None)
Received response(xid=8): (b'somedata1', ZnodeStat(czxid=14, mzxid=14, ctime=1743560879575, mtime=1743560879575, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=14))
Sending request(xid=9): GetData(path='/test_broken_log/node0000000004', watcher=None)
Received response(xid=9): (b'somedata1', ZnodeStat(czxid=9, mzxid=9, ctime=1743560879554, mtime=1743560879554, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=9))
Sending request(xid=10): GetData(path='/test_broken_log/node0000000006', watcher=None)
Received response(xid=10): (b'somedata1', ZnodeStat(czxid=11, mzxid=11, ctime=1743560879564, mtime=1743560879564, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=11))
Sending request(xid=11): GetData(path='/test_broken_log/node0000000005', watcher=None)
Received response(xid=11): (b'somedata1', ZnodeStat(czxid=10, mzxid=10, ctime=1743560879560, mtime=1743560879560, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=10))
Sending request(xid=12): GetData(path='/test_broken_log/node0000000007', watcher=None)
Received response(xid=12): (b'somedata1', ZnodeStat(czxid=12, mzxid=12, ctime=1743560879568, mtime=1743560879568, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=12))
Sending request(xid=13): GetData(path='/test_broken_log_final_node', watcher=None)
Received response(xid=13): (b'somedata1', ZnodeStat(czxid=17, mzxid=17, ctime=1743560887927, mtime=1743560887927, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=17))
get_instance_ip instance_name=node2
http://localhost:None "GET /v1.46/containers/roottestkeeperbrokenlogs-gw9-node2-1/json HTTP/1.1" 200 None
Connecting to 172.16.3.4(172.16.3.4):9181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/test_broken_log', watcher=None)
Received response(xid=1): ['node0000000001', 'node0000000003', 'node0000000000', 'node0000000002', 'node0000000008', 'node0000000009', 'node0000000004', 'node0000000006', 'node0000000005', 'node0000000007']
Sending request(xid=2): GetData(path='/test_broken_log/node0000000001', watcher=None)
Received response(xid=2): (b'somedata1', ZnodeStat(czxid=6, mzxid=6, ctime=1743560879544, mtime=1743560879544, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=6))
Sending request(xid=3): GetData(path='/test_broken_log/node0000000003', watcher=None)
Received response(xid=3): (b'somedata1', ZnodeStat(czxid=8, mzxid=8, ctime=1743560879551, mtime=1743560879551, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=8))
Sending request(xid=4): GetData(path='/test_broken_log/node0000000000', watcher=None)
Received response(xid=4): (b'somedata1', ZnodeStat(czxid=5, mzxid=5, ctime=1743560879540, mtime=1743560879540, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=5))
Sending request(xid=5): GetData(path='/test_broken_log/node0000000002', watcher=None)
Received response(xid=5): (b'somedata1', ZnodeStat(czxid=7, mzxid=7, ctime=1743560879547, mtime=1743560879547, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=7))
Sending request(xid=6): GetData(path='/test_broken_log/node0000000008', watcher=None)
Received response(xid=6): (b'somedata1', ZnodeStat(czxid=13, mzxid=13, ctime=1743560879572, mtime=1743560879572, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=13))
Sending request(xid=7): GetData(path='/test_broken_log/node0000000009', watcher=None)
Received response(xid=7): (b'somedata1', ZnodeStat(czxid=14, mzxid=14, ctime=1743560879575, mtime=1743560879575, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=14))
Sending request(xid=8): GetData(path='/test_broken_log/node0000000004', watcher=None)
Received response(xid=8): (b'somedata1', ZnodeStat(czxid=9, mzxid=9, ctime=1743560879554, mtime=1743560879554, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=9))
Sending request(xid=9): GetData(path='/test_broken_log/node0000000006', watcher=None)
Received response(xid=9): (b'somedata1', ZnodeStat(czxid=11, mzxid=11, ctime=1743560879564, mtime=1743560879564, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=11))
Sending request(xid=10): GetData(path='/test_broken_log/node0000000005', watcher=None)
Received response(xid=10): (b'somedata1', ZnodeStat(czxid=10, mzxid=10, ctime=1743560879560, mtime=1743560879560, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=10))
Sending request(xid=11): GetData(path='/test_broken_log/node0000000007', watcher=None)
[gw0] PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_2[0]
Received response(xid=11): (b'somedata1', ZnodeStat(czxid=12, mzxid=12, ctime=1743560879568, mtime=1743560879568, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=12))
test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_2[1]
Sending request(xid=12): GetData(path='/test_broken_log_final_node', watcher=None)
Executing query DROP TABLE IF EXISTS data on n1
Received response(xid=12): (b'somedata1', ZnodeStat(czxid=17, mzxid=17, ctime=1743560887927, mtime=1743560887927, version=0, cversion=0, aversion=0, ephemeralOwner=0, dataLength=9, numChildren=0, pzxid=17))
run container_id:roottestkeeperbrokenlogs-gw9-node1-1 detach:False nothrow:False cmd: ['ls', '/var/lib/clickhouse/coordination/log']
Command:[docker exec roottestkeeperbrokenlogs-gw9-node1-1 ls /var/lib/clickhouse/coordination/log]
Stdout:changelog_1_100000.bin
Stdout:changelog_20_100019.bin
run container_id:roottestkeeperbrokenlogs-gw9-node2-1 detach:False nothrow:False cmd: ['ls', '/var/lib/clickhouse/coordination/log']
Command:[docker exec roottestkeeperbrokenlogs-gw9-node2-1 ls /var/lib/clickhouse/coordination/log]
Stdout:changelog_1_100000.bin
run container_id:roottestkeeperbrokenlogs-gw9-node3-1 detach:False nothrow:False cmd: ['ls', '/var/lib/clickhouse/coordination/log']
Command:[docker exec roottestkeeperbrokenlogs-gw9-node3-1 ls /var/lib/clickhouse/coordination/log]
Stdout:changelog_1_100000.bin
Sending request(xid=14): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Executing query DROP TABLE IF EXISTS dist on n1
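The traffic above is the verification half of the broken-logs test: after node1 restarts on a truncated changelog, every pre-crash sequential node must still read back intact (note node1 now also has a fresh changelog_20_100019.bin alongside the damaged changelog_1_100000.bin, while node2 and node3 kept only the original). A sketch of the survival check under the same names and values seen in the log (the function name is illustrative):

    def assert_data_survived(zk) -> None:
        # zk is a connected kazoo client; all ten pre-crash sequential nodes
        # must still exist with their original payload.
        children = sorted(zk.get_children("/test_broken_log"))
        assert children == [f"node{i:010d}" for i in range(10)]
        for child in children:
            value, _stat = zk.get(f"/test_broken_log/{child}")
            assert value == b"somedata1"
        # The node written after the restart must be readable too.
        assert zk.get("/test_broken_log_final_node")[0] == b"somedata1"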
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
Sending request(xid=13): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
Sending request(xid=13): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Executing query select 20 on node1
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/.env --project-name roottestkeeperbrokenlogs-gw9 --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node3/docker-compose.yml stop --timeout 20]
[gw9] PASSED test_keeper_broken_logs/test.py::test_single_node_broken_log
Executing query DROP TABLE IF EXISTS data on n2
Executing query DROP TABLE IF EXISTS dist on n2
Executing query DROP TABLE IF EXISTS data on n3
Executing query select 20 on node1
Stderr: Container roottestkeeperbrokenlogs-gw9-node1-1 Stopping
Stderr: Container roottestkeeperbrokenlogs-gw9-node3-1 Stopping
Stderr: Container roottestkeeperbrokenlogs-gw9-node2-1 Stopping
Stderr: Container roottestkeeperbrokenlogs-gw9-node1-1 Stopped
Stderr: Container roottestkeeperbrokenlogs-gw9-node3-1 Stopped
Stderr: Container roottestkeeperbrokenlogs-gw9-node2-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node3/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node3/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/.env --project-name roottestkeeperbrokenlogs-gw9 --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_keeper_broken_logs/_instances-0-gw9/node3/docker-compose.yml down --volumes]
Executing query DROP TABLE IF EXISTS dist on n3
Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=zoo2
http://localhost:None "GET /v1.46/containers/roottestkeeperavailabilityzone-gw4-zoo2-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo2, ip:172.16.2.3, port:2181, use_ssl:False
Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Executing query DROP TABLE IF EXISTS data on n4
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=zoo3
http://localhost:None "GET /v1.46/containers/roottestkeeperavailabilityzone-gw4-zoo3-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo3, ip:172.16.2.2, port:2181, use_ssl:False
Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Executing query DROP TABLE IF EXISTS dist on n4
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/.env --project-name roottestkeeperavailabilityzone-gw4 --file /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/.env --project-name roottestkeeperavailabilityzone-gw4 --file /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate] Stderr: Container roottestkeeperbrokenlogs-gw9-node1-1 Stopping Stderr: Container roottestkeeperbrokenlogs-gw9-node3-1 Stopping Stderr: Container roottestkeeperbrokenlogs-gw9-node2-1 Stopping Stderr: Container roottestkeeperbrokenlogs-gw9-node3-1 Stopped Stderr: Container roottestkeeperbrokenlogs-gw9-node3-1 Removing Stderr: Container roottestkeeperbrokenlogs-gw9-node1-1 Stopped Stderr: Container roottestkeeperbrokenlogs-gw9-node1-1 Removing Stderr: Container roottestkeeperbrokenlogs-gw9-node2-1 Stopped Stderr: Container roottestkeeperbrokenlogs-gw9-node2-1 Removing Stderr: Container roottestkeeperbrokenlogs-gw9-node3-1 Removed Stderr: Container roottestkeeperbrokenlogs-gw9-node2-1 Removed Stderr: Container roottestkeeperbrokenlogs-gw9-node1-1 Removed Stderr: Network roottestkeeperbrokenlogs-gw9_default Removing Stderr: Network roottestkeeperbrokenlogs-gw9_default Removed Cleanup called Executing query select 20 on node1 Docker networks for project roottestkeeperbrokenlogs-gw9 are NETWORK ID NAME DRIVER SCOPE Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n1 Docker containers for project roottestkeeperbrokenlogs-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestkeeperbrokenlogs-gw9 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestkeeperbrokenlogs-gw9-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestkeeperbrokenlogs-gw9 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
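The connect, GetChildren('/'), Close probes above are how the harness waits for each Keeper/ZooKeeper instance: a failed probe is retried until the server answers, which is why "Failed connecting to Zookeeper within the connection retry policy" lines are interleaved with successful probes. A minimal sketch of that pattern, assuming kazoo and an invented helper name:

import time
from kazoo.client import KazooClient

def wait_for_keeper(ip, port=2181, attempts=30, delay=1.0):
    # Poll until the server answers a trivial GetChildren on the root,
    # as in the probes logged above ("Received response(xid=1): ['keeper']").
    for _ in range(attempts):
        zk = KazooClient(hosts=f"{ip}:{port}")
        try:
            zk.start(timeout=5.0)
            return zk.get_children("/")
        except Exception:
            time.sleep(delay)
        finally:
            zk.stop()
            zk.close()
    raise TimeoutError(f"Keeper at {ip}:{port} never became ready")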
Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Stderr: Container roottestkeeperavailabilityzone-gw4-zoo1-1 Running Stderr: Container roottestkeeperavailabilityzone-gw4-zoo3-1 Running Stderr: Container roottestkeeperavailabilityzone-gw4-zoo2-1 Running Stderr: Container roottestkeeperavailabilityzone-gw4-node-1 Creating Stderr: Container roottestkeeperavailabilityzone-gw4-node-1 Created Stderr: Container roottestkeeperavailabilityzone-gw4-node-1 Starting Stderr: Container roottestkeeperavailabilityzone-gw4-node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestkeeperavailabilityzone-gw4-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestkeeperavailabilityzone-gw4-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.2.5... http://localhost:None "GET /v1.46/containers/roottestkeeperavailabilityzone-gw4-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/41b460071f60e986733336be4b31e306b57f558ef54d9aab81dd0d739796d034/json HTTP/1.1" 200 None Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n1 http://localhost:None "GET /v1.46/containers/41b460071f60e986733336be4b31e306b57f558ef54d9aab81dd0d739796d034/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/41b460071f60e986733336be4b31e306b57f558ef54d9aab81dd0d739796d034/json HTTP/1.1" 200 None Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n1 http://localhost:None "GET /v1.46/containers/41b460071f60e986733336be4b31e306b57f558ef54d9aab81dd0d739796d034/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/41b460071f60e986733336be4b31e306b57f558ef54d9aab81dd0d739796d034/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/41b460071f60e986733336be4b31e306b57f558ef54d9aab81dd0d739796d034/json HTTP/1.1" 200 None Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n2 Executing query select 20 on node1 http://localhost:None "GET /v1.46/containers/41b460071f60e986733336be4b31e306b57f558ef54d9aab81dd0d739796d034/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/41b460071f60e986733336be4b31e306b57f558ef54d9aab81dd0d739796d034/json HTTP/1.1" 200 None Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n2 http://localhost:None "GET /v1.46/containers/41b460071f60e986733336be4b31e306b57f558ef54d9aab81dd0d739796d034/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/41b460071f60e986733336be4b31e306b57f558ef54d9aab81dd0d739796d034/json HTTP/1.1" 200 None Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n2 http://localhost:None "GET /v1.46/containers/41b460071f60e986733336be4b31e306b57f558ef54d9aab81dd0d739796d034/json HTTP/1.1" 200 None Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n3 http://localhost:None "GET /v1.46/containers/41b460071f60e986733336be4b31e306b57f558ef54d9aab81dd0d739796d034/json HTTP/1.1" 200 None run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] 
Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Current start attempt failed. Will kill 2720 just in case. run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'kill -9 2720'] Command:[docker exec -u root roottestkeeperincorrectconfig-gw7-node1-1 bash -c kill -9 2720] http://localhost:None "GET /v1.46/containers/41b460071f60e986733336be4b31e306b57f558ef54d9aab81dd0d739796d034/json HTTP/1.1" 200 None Stderr:bash: line 1: kill: (2720) - No such process Exitcode:1 Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n3 http://localhost:None "GET /v1.46/containers/41b460071f60e986733336be4b31e306b57f558ef54d9aab81dd0d739796d034/json HTTP/1.1" 200 None ClickHouse node started get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestkeeperavailabilityzone-gw4-zoo1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottestkeeperavailabilityzone-gw4-zoo2-1/json HTTP/1.1" 200 None Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n3 get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottestkeeperavailabilityzone-gw4-zoo3-1/json HTTP/1.1" 200 None Executing query CREATE TABLE data (key Int, value String) Engine=MergeTree() ORDER BY key on n4 Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/.env --project-name roottestkeeperavailabilityzone-gw4 --file /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml stop --timeout 20] [gw4] PASSED test_keeper_availability_zone/test.py::test_get_availability_zone run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n\n \n 9181\n 1\n /var/lib/clickhouse/coordination/log\n /var/lib/clickhouse/coordination/snapshots\n\n \n 5000\n 10000\n trace\n \n\n true\n \n \n 1\n localhost\n 9234\n \n \n 2\n 127.0.0.2\n 9234\n \n \n \n\n' > /etc/clickhouse-server/config.d/enable_keeper1.xml"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c echo ' 9181 1 /var/lib/clickhouse/coordination/log /var/lib/clickhouse/coordination/snapshots 5000 10000 trace true 1 localhost 9234 2 127.0.0.2 9234 ' > /etc/clickhouse-server/config.d/enable_keeper1.xml] run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. 
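The ps pipeline above is the harness checking whether a clickhouse process is still alive inside the container before deciding whether to start a new one. A rough equivalent, with plain subprocess standing in for the framework's exec helper:

import subprocess

PS_PIPELINE = ("ps ax | grep 'clickhouse' | grep -v 'grep' "
               "| grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'")

def clickhouse_pids(container):
    # Returns the PIDs of clickhouse processes inside `container`,
    # using exactly the filter chain from the log lines above.
    result = subprocess.run(
        ["docker", "exec", container, "bash", "-c", PS_PIPELINE],
        capture_output=True, text=True, check=True)
    return [int(pid) for pid in result.stdout.split()]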
http://localhost:None "POST /v1.46/containers/roottestkeeperincorrectconfig-gw7-node1-1/exec HTTP/1.1" 201 74 Executing query CREATE TABLE dist AS data Engine=Distributed( insert_distributed_async_send_cluster_two_replicas, currentDatabase(), data, key ) on n4 http://localhost:None "POST /v1.46/exec/e4e450954fc6560fe7f8037bfbb8d8b6c3aa3e5cf43bb6281a89daa21fc03a19/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/e4e450954fc6560fe7f8037bfbb8d8b6c3aa3e5cf43bb6281a89daa21fc03a19/json HTTP/1.1" 200 586 Executing query SYSTEM STOP DISTRIBUTED SENDS dist on n4 Executing query INSERT INTO dist SELECT number, randomPrintableASCII(100) FROM numbers(10000) on n1 run container_id:roottestinsertdistributedasyncsend-gw0-n1-1 detach:False nothrow:False cmd: ['bash', '-c', 'wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n1-1 bash -c wc -c < /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin] Stdout:1054688 run container_id:roottestinsertdistributedasyncsend-gw0-n1-1 detach:False nothrow:False cmd: ['bash', '-c', 'mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 10000 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n1-1 bash -c mv /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin /tmp/bin && head -c 10000 /tmp/bin > /var/lib/clickhouse/data/default/dist/shard1_replica2/2.bin] Executing query SYSTEM FLUSH DISTRIBUTED dist on n1 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:3364 Clickhouse process running. 
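This is the core of the truncated-batch scenario: each replica gets a local MergeTree table plus a Distributed front with sends stopped, an INSERT then leaves a pending batch file on disk (2.bin, 1054688 bytes here), and the batch is cut down to its first 10000 bytes so that SYSTEM FLUSH DISTRIBUTED has to reject it and move it into broken/. A sketch of the two steps, assuming the integration framework's node.query() and exec_in_container() helpers:

def setup_async_send_fixture(node):
    # Local target plus a Distributed front; with sends stopped, INSERTs
    # accumulate as .bin batch files under the dist table's data dir.
    node.query("CREATE TABLE data (key Int, value String) "
               "Engine=MergeTree() ORDER BY key")
    node.query("CREATE TABLE dist AS data Engine=Distributed("
               "insert_distributed_async_send_cluster_two_replicas, "
               "currentDatabase(), data, key)")
    node.query("SYSTEM STOP DISTRIBUTED SENDS dist")

def truncate_batch(node, path, keep_bytes=10_000):
    # Keep only the first keep_bytes of a pending batch; the next
    # SYSTEM FLUSH DISTRIBUTED should move the file into broken/.
    node.exec_in_container(
        ["bash", "-c", f"mv {path} /tmp/bin && head -c {keep_bytes} /tmp/bin > {path}"])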
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:3364 Executing query select 20 on node1 Stderr: Container roottestkeeperavailabilityzone-gw4-node-1 Stopping Stderr: Container roottestkeeperavailabilityzone-gw4-node-1 Stopped Stderr: Container roottestkeeperavailabilityzone-gw4-zoo3-1 Stopping Stderr: Container roottestkeeperavailabilityzone-gw4-zoo2-1 Stopping Stderr: Container roottestkeeperavailabilityzone-gw4-zoo1-1 Stopping Stderr: Container roottestkeeperavailabilityzone-gw4-zoo2-1 Stopped Stderr: Container roottestkeeperavailabilityzone-gw4-zoo1-1 Stopped Stderr: Container roottestkeeperavailabilityzone-gw4-zoo3-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/.env --project-name roottestkeeperavailabilityzone-gw4 --file /ClickHouse/tests/integration/test_keeper_availability_zone/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml down --volumes] Executing query SYSTEM FLUSH DISTRIBUTED dist on n1 run container_id:roottestinsertdistributedasyncsend-gw0-n1-1 detach:False nothrow:False cmd: ['bash', '-c', 'ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin'] Command:[docker exec roottestinsertdistributedasyncsend-gw0-n1-1 bash -c ls /var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin] Stdout:/var/lib/clickhouse/data/default/dist/shard1_replica2/broken/2.bin Executing query SELECT count() FROM data on n1 Executing query SELECT count() FROM data on n2 Stderr: Container roottestkeeperavailabilityzone-gw4-node-1 Stopping Stderr: Container roottestkeeperavailabilityzone-gw4-node-1 Stopped Stderr: Container roottestkeeperavailabilityzone-gw4-node-1 Removing Stderr: Container roottestkeeperavailabilityzone-gw4-node-1 Removed Stderr: Container roottestkeeperavailabilityzone-gw4-zoo2-1 Stopping Stderr: Container roottestkeeperavailabilityzone-gw4-zoo3-1 Stopping Stderr: Container roottestkeeperavailabilityzone-gw4-zoo1-1 Stopping Stderr: Container roottestkeeperavailabilityzone-gw4-zoo2-1 Stopped Stderr: Container roottestkeeperavailabilityzone-gw4-zoo2-1 Removing Stderr: Container roottestkeeperavailabilityzone-gw4-zoo1-1 Stopped Stderr: Container roottestkeeperavailabilityzone-gw4-zoo1-1 Removing Stderr: Container roottestkeeperavailabilityzone-gw4-zoo3-1 Stopped Stderr: Container roottestkeeperavailabilityzone-gw4-zoo3-1 Removing Stderr: Container roottestkeeperavailabilityzone-gw4-zoo1-1 Removed Stderr: Container roottestkeeperavailabilityzone-gw4-zoo3-1 Removed Stderr: Container roottestkeeperavailabilityzone-gw4-zoo2-1 Removed Stderr: Network roottestkeeperavailabilityzone-gw4_default Removing Stderr: Network roottestkeeperavailabilityzone-gw4_default Removed Cleanup called Executing query select 20 on node1 Docker networks for project 
roottestkeeperavailabilityzone-gw4 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestkeeperavailabilityzone-gw4 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestkeeperavailabilityzone-gw4 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestkeeperavailabilityzone-gw4-.*-1$' --format '{{.ID}}:{{.Names}}'] Command:[docker compose --env-file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/.env --project-name roottestinsertdistributedasyncsend-gw0 --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n1/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n2/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n3/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n4/docker-compose.yml stop --timeout 20] [gw0] PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_2[1] Unstopped containers: {} No running containers for project: roottestkeeperavailabilityzone-gw4 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 test_keeper_memory_soft_limit/test.py::test_soft_limit_create Running tests in /ClickHouse/tests/integration/test_keeper_memory_soft_limit/test.py Cluster start called. is_up=False Docker networks for project roottestkeepermemorysoftlimit-gw4 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestkeepermemorysoftlimit-gw4 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestkeepermemorysoftlimit-gw4 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestkeepermemorysoftlimit-gw4 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestkeepermemorysoftlimit-gw4 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestkeepermemorysoftlimit-gw4 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestkeepermemorysoftlimit-gw4-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestkeepermemorysoftlimit-gw4 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Setup directory for instance: node Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/node/database Setup logs dir /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/node/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/.env --project-name roottestkeepermemorysoftlimit-gw4 --file /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml pull] Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Stderr: Container roottestinsertdistributedasyncsend-gw0-n3-1 Stopping Stderr: Container roottestinsertdistributedasyncsend-gw0-n4-1 Stopping Stderr: Container 
roottestinsertdistributedasyncsend-gw0-n1-1 Stopping Stderr: Container roottestinsertdistributedasyncsend-gw0-n2-1 Stopping Stderr: Container roottestinsertdistributedasyncsend-gw0-n2-1 Stopped Stderr: Container roottestinsertdistributedasyncsend-gw0-n4-1 Stopped Stderr: Container roottestinsertdistributedasyncsend-gw0-n1-1 Stopped Stderr: Container roottestinsertdistributedasyncsend-gw0-n3-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n3/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n3/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n4/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n4/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/.env --project-name roottestinsertdistributedasyncsend-gw0 --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n1/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n2/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n3/docker-compose.yml --file /ClickHouse/tests/integration/test_insert_distributed_async_send/_instances-0-gw0/n4/docker-compose.yml down --volumes] run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Current start attempt failed. Will kill 3364 just in case. 
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'kill -9 3364'] Command:[docker exec -u root roottestkeeperincorrectconfig-gw7-node1-1 bash -c kill -9 3364] Stderr:bash: line 1: kill: (3364) - No such process Exitcode:1 Stderr: Container roottestinsertdistributedasyncsend-gw0-n3-1 Stopping Stderr: Container roottestinsertdistributedasyncsend-gw0-n4-1 Stopping Stderr: Container roottestinsertdistributedasyncsend-gw0-n1-1 Stopping Stderr: Container roottestinsertdistributedasyncsend-gw0-n2-1 Stopping Stderr: Container roottestinsertdistributedasyncsend-gw0-n3-1 Stopped Stderr: Container roottestinsertdistributedasyncsend-gw0-n3-1 Removing Stderr: Container roottestinsertdistributedasyncsend-gw0-n1-1 Stopped Stderr: Container roottestinsertdistributedasyncsend-gw0-n1-1 Removing Stderr: Container roottestinsertdistributedasyncsend-gw0-n4-1 Stopped Stderr: Container roottestinsertdistributedasyncsend-gw0-n4-1 Removing Stderr: Container roottestinsertdistributedasyncsend-gw0-n2-1 Stopped Stderr: Container roottestinsertdistributedasyncsend-gw0-n2-1 Removing Stderr: Container roottestinsertdistributedasyncsend-gw0-n4-1 Removed Stderr: Container roottestinsertdistributedasyncsend-gw0-n3-1 Removed Stderr: Container roottestinsertdistributedasyncsend-gw0-n1-1 Removed Stderr: Container roottestinsertdistributedasyncsend-gw0-n2-1 Removed Stderr: Network roottestinsertdistributedasyncsend-gw0_default Removing Stderr: Network roottestinsertdistributedasyncsend-gw0_default Removed Cleanup called Docker networks for project roottestinsertdistributedasyncsend-gw0 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestinsertdistributedasyncsend-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestinsertdistributedasyncsend-gw0 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestinsertdistributedasyncsend-gw0-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestinsertdistributedasyncsend-gw0 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. 
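test_keeper_incorrect_config keeps cycling through the pattern visible here: write a candidate keeper config into config.d, kill whatever server process might be left over ("Will kill NNNN just in case", which fails harmlessly when the process already exited), start a fresh server, and probe it with select 20. A hypothetical reconstruction of one iteration; replace_config, exec_in_container, and start_clickhouse are assumptions about the framework's instance API:

def try_start_with_config(node, config_xml):
    # Install a candidate keeper config, clear any stale server process,
    # then restart and check the server answers a trivial query.
    node.replace_config(
        "/etc/clickhouse-server/config.d/enable_keeper1.xml", config_xml)
    node.exec_in_container(["bash", "-c", "pkill -9 clickhouse || true"],
                           nothrow=True)
    node.start_clickhouse()
    return node.query("select 20").strip() == "20"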
http://localhost:None "POST /v1.46/containers/roottestkeeperincorrectconfig-gw7-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/4194b67c891f0db44c400f6bfc280d764907cc716e97c2d1e4139516e3cfbd0f/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/4194b67c891f0db44c400f6bfc280d764907cc716e97c2d1e4139516e3cfbd0f/json HTTP/1.1" 200 586 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:4001 Clickhouse process running. run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:4001 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Stderr: zoo3 Skipped - Image is already being pulled by zoo2 Stderr: zoo1 Skipped - Image is already being pulled by zoo2 Stderr: node Skipped - Image is already being pulled by zoo2 Stderr: zoo2 Pulling Stderr: zoo2 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper1/log', '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper1/config', '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper1/coordination', '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper2/log', '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper2/config', '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper2/coordination', '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper3/log', '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper3/config', '/ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/keeper3/coordination'] Command:[docker compose --project-name roottestkeepermemorysoftlimit-gw4 --env-file /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Stderr:time="2025-04-02T02:28:24Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestkeepermemorysoftlimit-gw4_default Creating Stderr: Network roottestkeepermemorysoftlimit-gw4_default Created Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo3-1 Creating Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo1-1 Creating Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo2-1 Creating Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo2-1 Created Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo1-1 Created Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo3-1 Created Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo2-1 Starting Stderr: Container 
roottestkeepermemorysoftlimit-gw4-zoo3-1 Starting Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo1-1 Starting Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo2-1 Started Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo3-1 Started Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo1-1 Started Stderr:time="2025-04-02T02:28:25Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T02:28:25Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestkeepermemorysoftlimit-gw4-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.2.4, port:2181, use_ssl:False Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query select 20 on node1 Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query select 20 on node1 Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query select 20 on node1 Executing query select 20 on node1 Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Stderr: Container roottestencrypteddisk-gw1-resolver-1 Stopping Stderr: Container roottestencrypteddisk-gw1-node-1 Stopping Stderr: Container roottestencrypteddisk-gw1-node-1 Stopped Stderr: Container roottestencrypteddisk-gw1-minio1-1 Stopping Stderr: Container roottestencrypteddisk-gw1-minio1-1 Stopped Stderr: Container roottestencrypteddisk-gw1-resolver-1 Stopped Stderr: Container roottestencrypteddisk-gw1-proxy1-1 Stopping Stderr: Container roottestencrypteddisk-gw1-proxy2-1 Stopping Stderr: Container roottestencrypteddisk-gw1-proxy1-1 Stopped Stderr: Container roottestencrypteddisk-gw1-proxy2-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/.env --project-name roottestencrypteddisk-gw1 --file /ClickHouse/tests/integration/test_encrypted_disk/_instances-0-gw1/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml down --volumes] run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Current start attempt failed. Will kill 4001 just in case. 
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'kill -9 4001'] Command:[docker exec -u root roottestkeeperincorrectconfig-gw7-node1-1 bash -c kill -9 4001] Stderr: Container roottestencrypteddisk-gw1-resolver-1 Stopping Stderr: Container roottestencrypteddisk-gw1-node-1 Stopping Stderr: Container roottestencrypteddisk-gw1-resolver-1 Stopped Stderr: Container roottestencrypteddisk-gw1-resolver-1 Removing Stderr: Container roottestencrypteddisk-gw1-node-1 Stopped Stderr: Container roottestencrypteddisk-gw1-node-1 Removing Stderr: Container roottestencrypteddisk-gw1-node-1 Removed Stderr: Container roottestencrypteddisk-gw1-minio1-1 Stopping Stderr: Container roottestencrypteddisk-gw1-minio1-1 Stopped Stderr: Container roottestencrypteddisk-gw1-minio1-1 Removing Stderr: Container roottestencrypteddisk-gw1-resolver-1 Removed Stderr: Container roottestencrypteddisk-gw1-minio1-1 Removed Stderr: Container roottestencrypteddisk-gw1-proxy1-1 Stopping Stderr: Container roottestencrypteddisk-gw1-proxy2-1 Stopping Stderr: Container roottestencrypteddisk-gw1-proxy1-1 Stopped Stderr: Container roottestencrypteddisk-gw1-proxy1-1 Removing Stderr: Container roottestencrypteddisk-gw1-proxy2-1 Stopped Stderr: Container roottestencrypteddisk-gw1-proxy2-1 Removing Stderr: Container roottestencrypteddisk-gw1-proxy1-1 Removed Stderr: Container roottestencrypteddisk-gw1-proxy2-1 Removed Stderr: Volume roottestencrypteddisk-gw1_data1-1 Removing Stderr: Network roottestencrypteddisk-gw1_default Removing Stderr: Volume roottestencrypteddisk-gw1_data1-1 Removed Stderr: Network roottestencrypteddisk-gw1_default Removed Cleanup called Stderr:bash: line 1: kill: (4001) - No such process Docker networks for project roottestencrypteddisk-gw1 are NETWORK ID NAME DRIVER SCOPE Exitcode:1 Docker containers for project roottestencrypteddisk-gw1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestencrypteddisk-gw1 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestencrypteddisk-gw1-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestencrypteddisk-gw1 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
Command:[docker volume ls | wc -l] Stdout:1 Volumes pruned: 1 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n\n \n 9181\n 1\n /var/lib/clickhouse/coordination/log\n /var/lib/clickhouse/coordination/snapshots\n\n \n 5000\n 10000\n trace\n \n\n true\n \n \n 1\n 127.0.0.1\n 9234\n \n \n 2\n 127.0.1.1\n 9234\n \n \n 3\n 127.0.0.2\n 9234\n \n \n \n\n' > /etc/clickhouse-server/config.d/enable_keeper1.xml"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c echo ' 9181 1 /var/lib/clickhouse/coordination/log /var/lib/clickhouse/coordination/snapshots 5000 10000 trace true 1 127.0.0.1 9234 2 127.0.1.1 9234 3 127.0.0.2 9234 ' > /etc/clickhouse-server/config.d/enable_keeper1.xml] run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestkeeperincorrectconfig-gw7-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/fa49764f77520cb6d6e4cf8e06d60e9f522ffd8d9599d29d0e26a36bbf89eb96/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/fa49764f77520cb6d6e4cf8e06d60e9f522ffd8d9599d29d0e26a36bbf89eb96/json HTTP/1.1" 200 586 Connection dropped: socket connection error: None run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:4644 Clickhouse process running. run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:4644 Executing query select 20 on node1 Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Executing query select 20 on node1 Failed connecting to Zookeeper within the connection retry policy. 
Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottestkeepermemorysoftlimit-gw4-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.2.2, port:2181, use_ssl:False Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottestkeepermemorysoftlimit-gw4-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.2.3, port:2181, use_ssl:False Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/.env --project-name roottestkeepermemorysoftlimit-gw4 --file /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/.env --project-name roottestkeepermemorysoftlimit-gw4 --file /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate] Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo2-1 Running Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo1-1 Running Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo3-1 Running Stderr: Container roottestkeepermemorysoftlimit-gw4-node-1 Creating Stderr: Container roottestkeepermemorysoftlimit-gw4-node-1 Created Stderr: Container roottestkeepermemorysoftlimit-gw4-node-1 Starting Stderr: Container roottestkeepermemorysoftlimit-gw4-node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestkeepermemorysoftlimit-gw4-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestkeepermemorysoftlimit-gw4-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.2.5... 
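The stream of GET /v1.46/containers/<id>/json requests is the docker Python SDK polling container state while the harness waits for ClickHouse to come up (the /v1.46 API-version prefix is added by the SDK itself). A minimal sketch of such a wait; the real helper also checks that the server actually answers queries, this only waits for the Running state:

import time
import docker

def wait_for_container_running(container_name, timeout=60.0):
    # Each inspect_container() call corresponds to one of the
    # "GET /v1.46/containers/.../json" lines in the log above.
    client = docker.from_env()
    deadline = time.time() + timeout
    while time.time() < deadline:
        if client.api.inspect_container(container_name)["State"]["Running"]:
            return
        time.sleep(0.5)
    raise TimeoutError(f"{container_name} did not start within {timeout}s")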
http://localhost:None "GET /v1.46/containers/roottestkeepermemorysoftlimit-gw4-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9d4ba6fd5b0a2b87e9e21ec6ce3953a0b095772adabf17be012e4f8563354d2b/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9d4ba6fd5b0a2b87e9e21ec6ce3953a0b095772adabf17be012e4f8563354d2b/json HTTP/1.1" 200 None Executing query select 20 on node1 http://localhost:None "GET /v1.46/containers/9d4ba6fd5b0a2b87e9e21ec6ce3953a0b095772adabf17be012e4f8563354d2b/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9d4ba6fd5b0a2b87e9e21ec6ce3953a0b095772adabf17be012e4f8563354d2b/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9d4ba6fd5b0a2b87e9e21ec6ce3953a0b095772adabf17be012e4f8563354d2b/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9d4ba6fd5b0a2b87e9e21ec6ce3953a0b095772adabf17be012e4f8563354d2b/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9d4ba6fd5b0a2b87e9e21ec6ce3953a0b095772adabf17be012e4f8563354d2b/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9d4ba6fd5b0a2b87e9e21ec6ce3953a0b095772adabf17be012e4f8563354d2b/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9d4ba6fd5b0a2b87e9e21ec6ce3953a0b095772adabf17be012e4f8563354d2b/json HTTP/1.1" 200 None Executing query select 20 on node1 http://localhost:None "GET /v1.46/containers/9d4ba6fd5b0a2b87e9e21ec6ce3953a0b095772adabf17be012e4f8563354d2b/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9d4ba6fd5b0a2b87e9e21ec6ce3953a0b095772adabf17be012e4f8563354d2b/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9d4ba6fd5b0a2b87e9e21ec6ce3953a0b095772adabf17be012e4f8563354d2b/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9d4ba6fd5b0a2b87e9e21ec6ce3953a0b095772adabf17be012e4f8563354d2b/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9d4ba6fd5b0a2b87e9e21ec6ce3953a0b095772adabf17be012e4f8563354d2b/json HTTP/1.1" 200 None ClickHouse node started Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestkeepermemorysoftlimit-gw4-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.2.4, port:2181, use_ssl:False Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. 
Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottestkeepermemorysoftlimit-gw4-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.2.2, port:2181, use_ssl:False Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['clickhouse', 'keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Executing query select 20 on node1 Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottestkeepermemorysoftlimit-gw4-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.2.3, port:2181, use_ssl:False Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['clickhouse', 'keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestkeepermemorysoftlimit-gw4-zoo1-1/json HTTP/1.1" 200 None Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): Create(path='/test_soft_limit', data=b'abc', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=1): '/test_soft_limit' Sending request(xid=2): Create(path='/test_soft_limit/node_0', 
data=b'a79nga4clsq0glc857z8inai7baexgusi69ixnc0a6otzklhfwfmy50sfdt2zzbsylgh7t4yclug0jsv6gct3zs72fp85xh0mkd2zu324zcrcxtaxda8t3sb8kn216pckhbezg8zn5k2t817fvszg0h6jvdcqaowuh6ssdrgrfn15xs9cmhzgdax0u7iuw43jo7xne9kiisftzg44rfzo1zqdujkli2a5ub7dv9nrpobwg46n9yv1foparkoqsrph7poyvorhicrnwyf74mh675yh9bxldtl2kjzy8w8nb3bzfk5aujcrihlj95sd5jll8xofyxhou7fkrrv29a9xhyte0ubmvfty0dchwnyw7gioae2meuwftxyxxphoa9s834nvah29v9dzhghfqvtnwqr1vnikf7aj1e0jpumr53tg6t3yivm8580m8n8d2pqdgs0pg9furcbye3btu2hxhsf7ijbtzeaaojkpb1axgkotts91417cj8on4a1nthc3feoatq4ojqmf3b63dhcc1rzi6xjelsi5f72vtjgig02gqevqm9ry1aq7wri4ih6wfkcfqo8c6ntf0zdehb9fom86cz5xuyp7jbnm285issgeui0vjeou7aye4sk2ydc627qxfjqyjdfxok4wr4ifl5vcbztrx3e9k1rzgybkpunoewvj0vjbfq62o55jw9gl1edt3bd6ek4zlcrpnimaibii3lu2h1sufj30afxk1pwq7tqonhwrtte0ztil82nq8f57z3nu79yaz12prx8zqjs78fc3eiylmmpy395fst857mtdcjdcw6n6jrfxygxqu7yczglnhal359daz5i3c4ljx4za0k97vsehrm1t5whd50p2v0rt0i82s18qdpyh5qzlspk7ni5qdysvwgoocvz2rcj5e6s2wayz2x9x2f6effooy0c8vou8s2vi0gn1wliuxhjvol2cxia8w6ogz6v3fakdx3cetq739v3', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=2): '/test_soft_limit/node_0' Sending request(xid=3): Create(path='/test_soft_limit/node_1', data=b'qglkhmfoz2azm2bcn6iefg0k6bvxb5297xtez7vkwr77qbr03hrppl9on96mbcc43vi5djb7mw86xwgjl4ih16cmvsr838vq7ihccek7fzwz5e5sk09vieimxwr0lsohuho1hpajghsiom6razxo4zm3z899d949tmdv2m3jngmbtc63g3pkfrs3sckap1w4usixvt2y4s5zr6h3qxskuy9txnrzjpev4awv62a8gnrk2qvdqom3er7z69jwks6moazap2pa8bxl15phtztkyr0s0f34x7zgno6omtihpezchfcqqrykt5wu4fhgxs8nhso26qsuugpzsqsmw9gn04wrqhubmcxng77hcq752msvtxm2t6shlsu3gbtrk07s9872xxpsy4yy0eip8zds6u5vljsc4fynez6ftcmsfdc80qlk6a9u14gyrhisxtkixh9dpd9qxaaezniya8jc629jpwkyhe4phe6q6v6142p2jshl2bv6ghpb6vsdx4jn028ew1g50odufe672iawsfqxud7bf0suwgnicrz7dwhfev3iy8yidfcjn58oiuzqtnzo6fbmrfntl5v5ajap6043mfyhmjbrgr8shoht5780etncxoiksnihi3t8rbzmo6xnefjpunatbwflau29fyy1xi7y6odujjb5vcwko6gazglvyz221dp0lrjq3afjtp5qqrf7d03p9oh38yakifvwp6u67i2b928zjru6nm6wdt386k3vlbdykjunlyto5rau18g10xz5yqgz81k89sp622ntxl159804o0xnv4dqx5m3lqkty4tu0axs11us5wzghq8tk3hobwjzy61zqeazeiy9xgb9tl40w2n1x5cffdflwltgpbydrg47ybv771ckged322fwa84pxdsm1owuqaacrlvbbfoxfoyd5on6j0szpc401bj25x47nrwmo92byufcgz54p86p13txs405pimlhzpq75qlr48g', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=3): '/test_soft_limit/node_1' Sending request(xid=4): Create(path='/test_soft_limit/node_2', data=b'9cm4cj8556s0q8oxhgoebi93a1lz7afjmlmpyh4pq8u75mva3y6b9yb0zyu1392w3lhfjzm7m8guqi04u8ysgbss8z6sssdj6m20n6joych6c44l7ol1kbaqv0z1mgvdzqv6m3dymwuopszvml7f4nd5zi5dzbss1utpyzbx28ilpzxovnsl3blj5y4io9j7x1pi8n0sayxgo6132v3fvlf56l8gvcrnxhqjn4ftbiia8zj2dw1on2242dxwje82qgryfyrakc45g0ybqjgmutybwipdhuumggo5pbhrtqkzgqe5wizmenwba3b9vp6f7l2korkxixzqit1w0ladmwyglis50d75wj9gq7ipomktkho2ojiwi3v3avgnbdjq8ox0fspizjiir10etuymgco2w1av25id3t2bvzdwvply35au9jxo7bsrmzz6nmo6bcg77klq55pg0ljpt1h34hv25r66dey2ue3790eanojgraods1jve5bcn8sicd79kh7s0rs5fsdp7eqogu3nbc27uzxx3abzmehnpsxbj9p7i3bhiyslq2wjafmrh9zutntqwpl5t69en6w0pmzzb4oi9bsfyo1jvpi9pb4cii7je8fl98es07o9tmj7wwoielvfadppkuhtf203gwushy9v9z2g850b2ir3gm9gtynlgylcht32i0zj34a9x5pu6mh8b92zotxhi8gys6v0wr1dltcn5d8kasqlfr3idwe6wxgt807qqq6tri6ap7dqv4uevimcdrg4y51nkmkidm79u5vuqh85qc16ya9r5ssxzopj1j4e6s7jdedplcua7dmuxgb19cyxbzuw7dxgwnys7s6rr7053r9imnbvp9kg7as5hb2vv9g47rsqh2qshue3746qlpnuamb4txetx5jehkt9v64atjd6uk1r3hv7qp351zlqdpf5s0f7a0fo5uyx9nna72j1k9jt7c0zlgewbcekhhsx08cmgwp7', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=4): 
'/test_soft_limit/node_2' Sending request(xid=5): Create(path='/test_soft_limit/node_3', data=b'l9alno1vi7842s1zpk4npiudbv86xa2vwkhigpj9172ymr4cu5hd91hqdngst4thhggqtv0mguk3wfn740gf73yzfk2hfvswg3homnu7creugwh8nfoaeoqzkqzk8crx3g7s3zpknk0rij7eepw9z024ldxlzjv8qa38mxaywb8p9e9se58zospn8wawo9mrqv0z16sqj1r72ruezuw6yy3h94onkt8iouyunm0bzlpgvgov7r8fkfxshsel7vgxjpp2lswccllsbnidftob62luu04pe9ttob0en420lzyrt840shu46269agvhr6lxqv4kco4d7uc56nh85g3z8d5thjiii7pxfe5bv9ixvleflkrv7s1pskfzvv3evy1iidochhdpxu4qq7ilggrjup3oi88ai3scex3arwqhnr1tfed0uo2lf4kgz8od8mbqxvo3n9opr269odrehz623golb6cjfqt7dex9s9dsozu5d59i1d7owieyip7ik8xessj9ffhguzs9sf8h42d2v5j6b815lcpuzzkwka5v5arzyyd1bljnf1fuwk3e1gxr352gnw2gpv2nts6jrbg2j8noyszixn9rlw6hng9ydz6pb9blk7b6si55botkjglxrea2f67kdlghv3hr94dknfckbybh4aiy9jca9r8lbyn0cqkp7qazdp88yhdxz27pospgp6p7kjdvou01mhp3ryrlmtxv89jlw99pam25fcb5s2jg1trl7swqwc5vz1p4yac2rvuh2lwrxan957xmq0t7yynbg0owj7ckf9jy8y1zqpdykv25xvocpm7engi4qh3g47lor9ce994wbfmiif3l6g2f2t5orjzgchxkhy44hebsrectvooi0j083qi49sh9c63d5xxfy29hpgtmog1fwaw70xvhpz8ywwrob74r7n5tckbs79s8sfua1e02vbujatfgwvp332aelecsk170ejq5416alq3kpmns', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=5): '/test_soft_limit/node_3' Sending request(xid=6): Create(path='/test_soft_limit/node_4', data=b'd2nh1ck2noomunawl08zr8h4yfhz1e7hy3fa3xmi8jrmjjisqmt8p9zzw31gez6k9vpn6wir8obejralnl86cn2yag8e2am2nwb1nymjp594m3wsig1ejsropi18arqho60f5b821cs46n4yjrknddatsfl3zptp2l4ig94wxc63bnd0nuh2nl64mn4h1uaib4e8pf38ozc501x623cjyetczxxkhg1qk0cz9168uyuk3xaqb3ihm4rrd59a59pybg5njnog3kt0wcnc8qe8cwnxtx0wb7siq5l63wouc878btjxs7j6dx09pq8id31h3dyy2wd1h1o5c49ir69nvgserw8ahcmvbyplhpupl5z7rf9u1e9ylrv978zql9372nbx2i1aqeq8arxgnk4vzeq8d6ikkmv6e5eb5gk75gv8gyjp44ba71oo3fzst6ktw733vr0mzxgapnmo87el5zqlbtp7pplfj8e2u6yagecqddxwphqonfxi1rvn0drycpephvt5tlqzv5fdcii9ti39rof1txy05whcyg8lp3ahxyhkb3w8p3j0nmpii4by4bohyt4lgulmesra7dcerocoplfo959zkb3zxjf5m35vd7xpfqd0q5p7uar4clm7oh3933ymwsak5wqyitlqfjod0ec55sl2h7mrjhhd69nj4xkq1ym0uusquzbezf2han7ngeolg3ccgwyvfq6nuxst99h7j8dwc2zy1qslwcm1umzru1rhd7aog14otd80av7mma8z6ac3ydhzbqhi3qgmwkezbebadcdn9hxw7pozvuotboyj6dffovhkkdvc2s3wb4om2rbnlqabwwaesf9pdivcco6qsne4s3kopfg4cljl0c5iorjjjymxn4mol912zc1hb56ht2sw282hjsbvrry8okcs2vlhwpm9v94osrgvaubqdha2g3qdaf4qn50wqbiirw78i9vl9wabcrv6e48w5zo86tajr1js', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=6): '/test_soft_limit/node_4' Sending request(xid=7): Create(path='/test_soft_limit/node_5', data=b'y6wybb0j8qrqhcit5jmhk46c95qpiczai8miaoo8evjmue28it0b6qvb4nrg9019ffgrz7wlvdlc9yt24gj14ofqkgy58v6mu1tq9mrj07q0u7nw2aoxdrrz9flysnqher433dppoh60725tsqp7dkdsi4dhpwh88w64uizbksiqzgc39tit0iggowom93za2tqpr889jl1uqk4k3qu16a7krlrws2ydromckwiigjtxf91i3fzfd8g8u5s5u9j7il4bchzgi0wqxfrgrvy7eiih7b5ubbiixn6x31y6hs2mo98vsvjl5islfk0ewiey8aaeigb233jr1g4u4hh5rjf4kyczvfibaytb75egjad0ogvo97nszyc65cvezlzp8240tjvvhtxemin5qhza3s9hq0986z7qnxcyust9o4f59am91xmer6ph2x5ylxza9kqntpqbpckim0z9pqhjgx2x95qjctvwyhofj1f13hkelylpbmm3hkfv4xf8sdd8suvn3a4wn22rvxm39xgrrnv077fl1isow72047bkzsuef6ny9hi7hve9lt9iti2qzxgzhif8s5z692yu2qrrib0e24jex2c39b4kepp54shmi6tlruss9j2s588nb3p4krz8d4nr8uhlcfc292ullvmcl04vfxebe1quunpx1c9gvt6sj6io4kw0cjpx39umqmp3i4udwal6me93tr5k0m8worwlmqzhmt064fmptvqfvbvaqgcv90lc5y0xkmqgxha5gjisee5v4sflsk95r8ua5g88ermqhsiv1nhup0zyvr4lb9hohmtmzow7hkfmbcaous0ovj1tq81r4itcusa763wz28odjg5tjo1iddj6pbpcc14fa13wtdtjmivvafnyzgejxmsqdcul71lzak4zqzfxrjia479n0lw0mek34s8wrfnisjm5sriexxqrmvbuej3mqsshdedvbusfvczupfwl576znikcfozm', acl=[ACL(perms=31, acl_list=['ALL'], 
id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=7): '/test_soft_limit/node_5' Sending request(xid=8): Create(path='/test_soft_limit/node_6', data=b'9le69rj03kfmkyz0ehlpihj34vmkr2x3xk9txsbxa38cz2ubog5mb1dxjuzjbkn785qpppkyity7pik1jpdl3f24u2f8ku2oi8um4i7ay5buxssfdc2pwomgd785lfm8lt1h7zuvu0pyodey7b1vj3unduja1nxymeqoeaz96k6qv7gfy92zfcosqytjy1s3ffqghfu5iojqclrkmah2j5kgxhho0x87jtyjsjejosmf6wxo8t10zihc36dkiuv4gu1897qymtqshwaj323t7yje7zcsdzksjnpbp67gndh4hk1g5p32xwxmphcijgl4ndpug9b3xaqhpornxxw225yk5wt00h030yr3ck2t4rxvhcy3kogcmv9fixqu595kusi9t53gjmjl4sxg1mnzqz736meh9xccuwolrbzirjiuyrpm523cqujgaosz57zm1tjldlo3d9pswp4ji8w0ul9tagdpmim4sbpaoqf4o454v9hhy95cqyotz7crzav2tcrfr7tv3d53cooux9ovcbl1cvaqndzvs8us05ui6zvuwed4vmzj81jeg0ewfbqlr3qf5gkdgvghzrqhgq7sa8aca4wg5afen9mwhwkeefg1npc70w7htkm3uio2u829vd1tioncm8c10z2lv7yor3gwpame2851zko7hlc0bj7skazmfdyifemtnkjj9f1ecvfppe8g0lhrfju4v8nrii1r0ke1il2snjyfpomkgtx1w8vpt24uhhecn5f05uv0snjfbjd68daj01hu39ecxqr5j0nfll6vxqvwg0el5mquglr9du1swptx0rt86xerl1gz35xfsqukmctqylhnwnzw8c0sd3y7l3ie33gsqjpbgurd34dq6t8cppeue1ao9h5w7gz9mtd0gwth5ospe685kokh77u095in9w0qu9zxzwh88ng36h5tombia4aywc602diqrvam2zsp2fz1ahwzkwuoicaq73cazf6g', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=8): '/test_soft_limit/node_6' Sending request(xid=9): Create(path='/test_soft_limit/node_7', data=b'eq2ol3f7jeunh3n2umjlfqa49r31ql6zqt8l9hkzcs6ls25jg6dfk0xwrkydlewlyd50glzakecl01d1bznniiwqqq06iw8zroqze9bblzm7wirkyzr3d6bxxadof2ncsm83v0rxy7ej5zfoa6vfokzvmv5wxaja97qbz4shdec6o9esn7lgawcdk54vd2yi7g1nqpuyng4h1kcnr2hlvx88fyid8588tilwkjon93fl64vrovt02ktlpkze3waaufmp3n6xmw32531gfeksfs3vhzo7xlhe5edc4r82rf68cett73n5gjzciegyg6gspxgchzlw0qomxsmh3cost7axl4id0t1s1p3atgrewogimdqbg6txwshqo123bw7h8ubbu7rs8k57iogou4ejmzkgn8sweoejkgiaaw1dogm3nr5xo1e6kxo3zqq36jr4qed4nssnnepawxi2khto9bopn0tjs2dp5hatd4wvhag3c2k6f9jpc492jyw37ymdx9ucund6mtgzil8aucpqrqzckvmm322iebigpxjk1mlhfad1xxdmwrja7e2lerm8i65oxwalikdthl9fhx4pqxtmfp5mz7eusf963eer11l6l9j96n5dundhp15kl7gnay33iaz6nrk6sy9v74529y9mkntdvny80xwpi34mb7oyj5p189kxsb1mmk6jfpvok757plwbx0zwf58wjpnc7sm2jroe9h2468w8ajygw77qbk0lcdddua425r5t03qlpgh9k431mns67j3vwruyt8hf2vgg80f8q13duh9bay3rjenv7ingevz0cz9r4fd9mljhy1nazf5dwpi5glnomz80v4d3slca9nvglmzikhww21b3xvukvtjlxh215p3dcis026xns6a9yuaax8uq05qmpuk5ugtsh5kf04p028rt584av0afk5irbzimqkhpetxnwob63e30f0o8j8s5jm3ozrn7nfrfy4gnwp3m', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=9): '/test_soft_limit/node_7' Sending request(xid=10): Create(path='/test_soft_limit/node_8', 
data=b'ii4shaootga8obfvrpc7an4z8vdf2fu0zsj4a5d14pg7vyf6hsire8ivgbl8wck4hcw4g6fgwsyaj1q776axvksaj33zv2dlajkyo7d2ywhlwxtuzztejb9ekllrdlira8c2ymxopa5qdq7shcmxwu1d2rmi5gi3nuvppxux1lgztmgogw4fl3f1e8wkdd7hna3vdzc7dm3l7brwt03ctpd7z6zrcvvwfel12mug2xlmt49br3skkv7n4lxs7f5oow2zxf8v7mli7wg9kk09ia5nxbm6pjrr58n2yf0zf7zai6wycpses3sdo4v32gmqu3kpww16cyw4817rtt0my9fr8rhqum58ucdeoxhcl7s4kuktueky43vhq8khhajg47em3ag7xnvspplic94oaclgwzd8l47adb7wmfgs6ph3bssct6pndl7tby65ki03r7juoegoj5ffmy4bzf1mjkbnkcw35jb00tjqzvb7hrz9bajkoqy8dkr3a9y5y29c7vbiyz5p85pxzefcj6vkb71wsoxqeqji8cb07gsjvxviu6ytiy8iocvdndr0i1s6ma4r5jjqh0ix3te91jsc0iphd6svntm2x5cpsdo7v8j3hi2l08tfkkvgkntluf3ff8wvf0frs2225moulgu5qbksbwdwukeh63024dpp2u432zyw8i4ab1na146byu2qx5pehbj68covzp5pvdpc20z13t69q8wq197xijmuddiesbwn7j44mjitqn6972o9ss83s4lpe2pmvwm3s10gwhb2aogezwjwm56tzrfff0ugucrnrali887qgqu1ly4wqlveo4zbjbg43n2spmqpck4xxn6e19q7cv0r459nah98779an5fic74ji7ewwnltsk5m1tp254vz1zpqzy550i4d6jeiz1ajpvj2q01rccmifhqhgqvvafthau2xr5hlfs5qgn40quyn3ig51h1308jtgq7wv1jftqtlu0mn', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=10): '/test_soft_limit/node_8' Sending request(xid=11): Create(path='/test_soft_limit/node_9', data=b'gzxf6vgfbl8elpctyk2phmk40ysc3hugkqwmsp0nzkvmse9d21db5t8un8eluq8v0jsugo2i3an1zaqljs9gpth5zrwmunnvud0ldsbuh9tlug7zuk1wavooeks0ll9wfikto0e3od5t5w0esovto91gi2cgu4bb7swgotcpbffoy0nc272d9pg53udx3pe7zz1d627ahb7ola8wm74xo83pdtnavilmzzl01nlj70ec22rclx3xvwomsjulwp41uxmbh9bldp13pu2sbvrv4t8etkkb9b2tz6iimish74yum84mjhwc3rbvu4lpqs8zwhfv6bbuknqpw9vw2q806pxdgqbciaw4945xv9h5uyclqhmku7fu1i6odwrxeo69s9hoqdak6kf1v0o856akwfw7ao8w7nfoo6ix1q4vjofevh5dnwsrb1vswxcbisonno7i10tg6wxzk3s38knsr3b82l1sr0ko44cbs0a7r2ndko9rnj6vhwx4w57fnpw0v94swsl27mdsff96nkovrx1waxjutfau3q75wselitgtg96q6t8ehf3epu1wm9l9g5uchbhz7kdceykv90gezfxvkidpx608jdhiu9fhj0q3zylannwus1ypynz7jn5nu951mkzekgeud45jvmz1p41jxp40uofsw4fuu5jwmydo51drdphobltzlaor34k7e28616u8dbbrmnjmtvk9zao2nubq1d0mpwus70zcnwz1di3kv10lfr12ygos93sl03r9gz998skqtqa41jq9dbtrddhu1mhz4sdezy3jcqzmdc3vcktok21k4njnr7uovs0mher2taaiahawxlcw9299msm2bxssijuqmhegazi8tplbmfwzih3rj5uk7qs0mbzmo6ztkzyvmurnr2okw7bfn6i1iwpjdsoe392mnto4ql7vnd1g8oplzny0hmsy6yj00gnlsx8unbhm209fcpsit5k1rjebb9cglnuj', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=11): '/test_soft_limit/node_9' Sending request(xid=12): Create(path='/test_soft_limit/node_10', data=b'sz4axmflnznw2jp0hgl288982j6s6m7jyre8atezcqsrfxzih11zqcapr2hvpts537sdguzl6yf76b8q28ivs2a36s464ow8rie695953xpz11ooawludbb29czrdfxd8wefdemu59061jkbsc0jm5unjzg0g4iipjuv2a855n99wfqxfopqu3oqwjeei8wmg5g0dze5qpl6j9dkz4vphtlnkejf8men1ek4r78owwasjqk30ygjceleiqntsfwaohptj9520ih48r1zpjyvre3nude7mrwfdcbicf1nmo58cd2bzy0ims8mlrsl4k0jo97k5nrdxyfhqedfowj3qgvfne41smyc8lt96ov980jd91lorwdw0h73gx7sqoj9uzcst4w4owouwjdfb4pou7omd0gf4ln017h4f6azydye4bp5q3i3g7chr5pa8psqf9p3o3etd819iq73s2ccfzztdaaaau98nfg2x5ctz4kdnd1ybcs4xi3pfzjvbpyqpl7cootnr6lx7n6bllea7zenz9ahz2ap0yt0t7udgyg26x0fqtz8mgaf4tsn9g9pqk6a9lx7o8tmemo35dure97dr25cuuomc68drwjskex6y6r9x2do2phudo7gnwxz9ex31p68v7deb3mxyr231fiimyl0116a13sldsedpll64xq7t7ogjahi3a0lwtzqm5bgn14qczx9xhbfwf4ygkrrk6iiu6yn4hnjzqjssve16npkq0z6635uzuhyhvxt2tqvn5vic9bui024hmnd6d27vaguk2wp42ruo6cgyxgqh4o6s02z6w1cshju2eex0ry0ba6kyjzx01erkqt4q5mhx4wnydlhll651zk5x1kb9d1jvwy09490x2ronhc2za91qntovzbsao84p6wly3eznf1hgr3tbv5giuxhjhp3akckpj056ofwmogwevyej8udmkmy976m9mzd80xra6gs20j8tfw1adxkk6gh', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=12): 
'/test_soft_limit/node_10' Sending request(xid=13): Create(path='/test_soft_limit/node_11', data=b'hkdz2e1sjdpjkak0y9l9inf0we9yo5cq6dxwvu6br92pg0kk1jb5z2umfry95qeownrzf0zxil9oi4mo57cu1xp4ersugu4gkovvwxqtiqdt0x0s23fxaqqfeq2z3dnpiyll2blwpautx3ujna5ufrxc8jy1mnm0s27yovnbwrqdrtpmfkhdrzw9brvi9rxvr6llzhrhjttgtd4p7eui9zxctbiaszbyxbyz07oo3wanivmkutgz624iuutrwk9nixofj6ktu1d5jb12uf6o6gf13emnjflht23rv74foeixzt9hokfvrko49i6u4o4twujjpu5fvvqvnytx29kzhnwfdn7pbc5f4ijlsdbzw07ngk1fzuan9flnn28rdqmjc2hct1hq8ccwyn76x2w6fasw5pq9e1jyhg2qxjk938q6sf7lhxdith189r1wekbtigjoa6x1l2lym8do4td1hye8rkluy16o05pxugojze40l3cs165b7a2fb6bk67tkg4z4j2dywoy2sobrv3cx6h0tbqtnt3x9dmqmvx79q92by6nar7zcovmnidipvcy75g31ih8vaglkg1zv8do6sm9cmdux8sn2uysru891ypl16bqj0wnbezhuwop9pkc2n4zrbdlgs3b7nv9c83xs3v6ea9pj2ng2597wqpwgxa8p0go4wc6350gw8juwiiigjkwnojd2dekfpdfl5yeoyjqfpi5ju96gefevccco32cyfengvf2aau2qtmtsiir71qjpdax8k6qke6iq4nruwc0uy7lfrt4cjl15rsnrkquzuftwu1zf17qez16ca7kvunn40nbt7jsfw5vg2tfu16r63zhoz7bq4b6sc031vrsnizb5z26qtfet4p2kee3ltya7slmm00bezil7nv8owo6bsqc1sa36sfx14dkn4p4f7hyvaqgxvexh095hvgle48gb17l9gcplfjm8h0alha5ss5rqhlm5g1ojlkj4', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=13): '/test_soft_limit/node_11' Sending request(xid=14): Create(path='/test_soft_limit/node_12', data=b'f8lq8mjjgmp7259zy6xg7t57b8mql75b5ukql8unwtvwj4qskldh8hl8cy5sk8gf1kq3eb2bmx7e339hhded66uuhsbr85zbdzki7qzqc5zd5a5d5tz2y82flczblzg3ba8eaq2bhj8yinmowrxws2fzlp7exnstpktiv9q5glrtsdxshg3dlwi3nhazmo8vsvvnswhxwfku5a5bwrsagh45md3zq9xu4vts8qiukvsxawz32mqwduiqljo1ztymrdngyqdhmzlb4neywh2br1cyfqmia6g6yx3mqvx87lu8ijbjvldyq743hgs0iddjdmp0fymzvmej2bioz7876qwj4guzt7gfcxzp4vbwq9u2vouv2stuv7jidtl2190jzsxa3osgcbvolcgf0hebbbuvx97bk571gg0ud0kehk3mpx7wcttscoh8wbiyuvzi4ji4jc2tlw53uzxzuyj9sc4ep1ra1rk4swo2xhp2fylj0mzxvxf4b9d55k1v6jctu90749k8uncj1azab02lfvmn1eg45wfcu6xsppql1xrv4k59n8no6h5of5j2blt2i65be3mfxspvrhd5mg3dsj849ftlyh9pfiypfwwailgdnl5hgzgv78lh23o74miuyhaqhu86oa3wwz3iuwhnmj9emaqvwso88jvoqw08ios7hyil1d1uvbn4ttgppbbmg8i7pwzn20v10t24pjfb2qzrxgqdo9nbnc98t9y5dde8lk1il9z71sgl5kbkgw2k54enum6r3wyexg5k2blfdyudzhryahzebngkaon2rp35z0ru9k4l6vfounk6cdaqtdw65kp3u17g2ltoma9ij6s30g5eyedjvan8evvrxw6ps24qaeov3it20xypa02gtgeil2i9kmdu2bl62bg3ac8n3lhg46n5mky4rnsrxrt3mnj9ah4af35437oh120aua972wbxllahzv2hmyrykg5lyiqkr5qparjbepsd', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=14): '/test_soft_limit/node_12' Sending request(xid=15): Create(path='/test_soft_limit/node_13', data=b'd5zalt1hcz5j8jsoyvgxwvh41ucdqbkhtfjqswhl8i9tchl0ikijlvrtsmkw3z3nw1z1qouh7t0v120z2cl3qg1bgjz71g3qm5ltu0s0e53ozjzz9arvl0300mx4cdwllte0jcryz955v2ttecypgu9aii61mlbh123cqb8m2nlylraln2pdxke83vwb3ibhmta2a571qhy6q3dpnyx26t8jyv95icawb9fjkivta9gi8w4oei2dsyqahi2do3v0o7lc4see7blbpfvlr7drkhin008zassn2aq65oo3ryzoxmyuqu7ofvn8xxk7l45i2q5phh6m6n6s8yv36wj11eyo2ijaxji8yhp7rg3pffd5votz4d8cy731nc6so2fbef9h161teq07ixc5rqqci893xskk5i5zgg7mxwq4m4yeut4uezwvqvz5ajr0oziqukaftb9kxh8o9i67pt31qd941kovmslxftno65nn9mekw3lczpxwvit0ycuwt4jon3u9s58esk3y4ylejp1di0n9zkome6lbr9ym6h2uvpt5nym37n6w39nq9uwa6l97bsrpfemkc0faqyutjifaqn3vqc8bltdkh0ctneviz5afp9zram32lieumodkttfwj4r6yud4yy7sfc2rxdo3ebipe44fgplj4hrgp1o980mrfkmivuwpjn68481z596tjb129k0w3gfb6ps2eqjq4o70v0o7hr0c36or34v8p2pecp4364cvfo99jdl3ytfi8z5bbmmtpryh2sqpx8o6i2fenqei4f3v8injclsgeu5grjvsn3j3a09ccagpo177hmm41m840h7fpqpqjlyy6avwxuq1en2x9cdg48gjbtyvyd8mzmdvao8hed75d3y2606gg9bo21a7pophy0d48wf70pu8mqbyyujy6jksod3o53p0id1ck0uwh69a5fq85jwsoo1yakisi7nerlxt8wu7eyds2rte85ijm6k2', acl=[ACL(perms=31, 
acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=15): '/test_soft_limit/node_13' Sending request(xid=16): Create(path='/test_soft_limit/node_14', data=b'v1cw210okra38coz6rxzoq98dmkwcg2yn42q4vjtr912foqatqd70u12a9sxrqdl9ejq8ttb79d3hfk2hkvonlh9ydjpjbab6guwhdfmm5xip153b3lt5qosvwzxumnntud0xobancaxgh0x0t9xkx26y6dhk5gabzi8a51nivul2wegxljcfghtbjvy5zgtbe8pauqads2rj0m9nfftuym54zyc56fvrrd47ewkupa4e97jb71qdavybp6oezvbd63kvwcmsprz6df1pxtbk6vxawb9c8h3r7j4py6k3yeynucdijhfj3ey0kvpr6hagdcowrx448q28j2nu14y79y61sil9m3hipsyi29tj0j604fxji73pu5ongb8pd9ujkppqbvxofggo169hsxduh91xc1od7xrxu8lnmwc11alk72ngivg1laaiqusd5ojv8lxlmkf9sdqf0lwsoel72kxhsxwziwakxha0pjamvv604d4q223c8vn2u3eb7cpf8e7lmc4u24ts95xc5l1vv62qn7h74a9anb0qb3jckjgjalzbc28r1dzty4vozjtr413fa4oinrwuzkr46ynlha4rrgwxal9bdowto68wsssbla461j5jei1th9hkafx6ty4hyksvz3n3hltxgaubwjy64kgpkh77cc40cmw83gwq7dseaxrvfql2fvn5fuakivi6k2uiafm1122c157iu71luepzstzaw4gtl9mma85x6kupeqjzxgqj1hs9plcepugodhbfmgr43pnu1qp3hv9pgo5g46vr2oqulamm1x4r9hg89kw1rrdlrmqqu2aqxmxa1imogbxirruw5guky8eb4zqf9mt7eazatgyrlunlv5e0209te0m9eo8mt1jgfj3g7k173ock02iba347oi8lkq4uoi2dg1boyjlpgmjkxzv3sxdgj438zmjhgwobcov6jpo78f4zn5p2u17ieaneq8uhrgaoy4ozevj', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=16): '/test_soft_limit/node_14' Sending request(xid=17): Create(path='/test_soft_limit/node_15', data=b'74ezfncjdfwxzh0ut47hzxj9lp5clrpr2hqzfm3ky79r3dznbzr984teqnciek4b23tkv95httydqbl5bq4sv6efniq24tx1ppb8j1rvrztlgova177c1eueupkylvmz8acfqbnq7dkekvqzd94bcl004075x0h8uzlu2idqnuq144fiav4bgxeo613ko7mq8lx9vqouo2q7vks36y4v5uif2755jb64a0lhfcmnwh9p1aal78i61qxa44z3529h5wub08m5hcr9evbmhb78isj9oniykgqqa2h5fwc60i4ipmdfkg1rdgeoykj5qvxttirrmayconb74y1cxvg8zi7ttezb6bo9jrofwqujk9730x7521k8w8wserhbkhwdgteoe8krju6tx5zhsu60i3gnrq5lzvh91vjcg7jb3w77s2yajhj3wnvagtmu8lhfhuxvhvdgadgma7brr3pne07uudjj5vsk0kqu0v4h9g4fu53up3nbsiizbsrvc3r55lm95h1emslax2inmcyndqyqzxbgfsf1jvii6kaoyua9ug9exm3abakhnvoecdohfh4iukmxqsgda9xdsld9z8k8htl3zpqoxjt0qa1fenwq78urd34xbiw8qeplqk0vvk7bci4o4nz5pac2flxjpux4er2oo5xy66xo3tx4htjaytjw7r2tuf1ldtq5zkz34at3htul29p54vw42302d9mug6t6bnlwxi78xplxfjmhwjbpczmx6oj6fdoe3gaudcocks9nm2irmjpw6r4r50qfjmqaxrvoxcsvc412the9zpq8xllnrho6k0kc95q2ebvrkr8zp59iv3i1c40oovzi6z5s0bff6j5pepmz617im7vi1drnjpgqb9cap6hto74dw4qztcfzhnl5kfmman2ochpnoq62rg4lfal63cc928e2eq3gpvg0eovrc1ihdz1t2mz4wvun0w6djjp5eq38rqqd0m67uximdpbo', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received response(xid=17): '/test_soft_limit/node_15' Sending request(xid=18): Create(path='/test_soft_limit/node_16', 
data=b'716m9zbzespc3g8sp87cbwkd4s94gfo094qtgpial5bealcvfodoi1walcoc1lgbqt1wt3q20unxy4slekn7toewu6efqs6qdysfkpmh0emcz9axx2yzefyw7sg2w36utok62ez07p6gpna3sa59xg99pegqfpfyauq3kum9ddtbevuqwi817dzwtrdzt8cb49rn1qbgk1oy4c8nfcx8a44649ldnurygim4uwx980ussb4cxjlnqtdbztzh8c9uzhibms4m9fhmb0lw7zti0lxvxjt70n9c7yexhf7g6cclovifyx8cbbaoooaas619tlox32pq4bu2vyhclm6em0p99hw486hr31gn41qirjabju8hkofvwluvd6kknf2jazd9dlutfmtou0ecybcwk7uw2g0iqip4iq5py5p98eyklz3g9p71vxbv947914bsil30o43krf10cvbii2bywwmrjtiho62kekzh5d9417pt4r5uymq6pqf41z1mh9w1ndvip3xr048g7456hvs5y20fjcrskrs8hxnic2j2bqboyg2gsynvo8r14tfti6ze8rnwoznxw35rosselpxjgc3konuxzqybh7k7wx8c5emohkq0673piu8yodvucstihv60e40oim3d7nmk214k5yl9r7vxibfpta81piegwlhnto1dfbwfg6yrovk6je8yied1idxiz1ee6fsxkr1lcfkae4d15a13i5jvagbp3jns5wdut6r8kxjhq85cvw9sg3eqi3gh1veufq9beso7fqk5l5575x3to4ks5r4r2e30fc78p8p281gwzghs5l7n5uqgxxsxs2xz4bxi1rjbxyf53h3mmjajrjflo9ogpqhjuy72yhrr00f0u6n8ds61ymrjskkhb7pwos0a42lpz88ei91bz6p9manuk4uzj61ibellmo4v9xpus5sam65c5uft0fdqwkc6hygqlo98ztwxc0v94g0d7w6s6j7z', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0) Received error(xid=18) ConnectionLoss() Sending request(xid=19): Transaction(operations=[Delete(path='/test_soft_limit/node_0', version=-1), Delete(path='/test_soft_limit/node_1', version=-1), Delete(path='/test_soft_limit/node_2', version=-1), Delete(path='/test_soft_limit/node_3', version=-1), Delete(path='/test_soft_limit/node_4', version=-1), Delete(path='/test_soft_limit/node_5', version=-1), Delete(path='/test_soft_limit/node_6', version=-1), Delete(path='/test_soft_limit/node_7', version=-1), Delete(path='/test_soft_limit/node_8', version=-1), Delete(path='/test_soft_limit/node_9', version=-1), Create(path='/test_soft_limit/node_10000019', data=b'abcde', acl=[ACL(perms=31, acl_list=['ALL'], id=Id(scheme='world', id='anyone'))], flags=0)]) Received response(xid=19): [True, True, True, True, True, True, True, True, True, True, '/test_soft_limit/node_10000019'] Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/.env --project-name roottestkeepermemorysoftlimit-gw4 --file /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml stop --timeout 20] [gw4] PASSED test_keeper_memory_soft_limit/test.py::test_soft_limit_create Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Current start attempt failed. Will kill 4644 just in case. 
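
The soft-limit trace above (xids 5-19) follows a simple client pattern: bulk-create znodes with ~1 KB random payloads until the server answers ConnectionLoss, then reclaim memory with one multi-op transaction that deletes the bulky nodes and creates a small marker node, which the server still accepts (xid=19 succeeds). A minimal sketch of that pattern, assuming the kazoo client library and a hypothetical Keeper endpoint on localhost:9181:

```python
# Sketch of the trace above: fill /test_soft_limit with large random payloads
# until the memory soft limit triggers ConnectionLoss, then free memory with
# a single multi-op transaction (Delete x10 + one small Create, as at xid=19).
import random
import string

from kazoo.client import KazooClient
from kazoo.exceptions import ConnectionLoss

zk = KazooClient(hosts="localhost:9181")  # hypothetical endpoint
zk.start()
zk.ensure_path("/test_soft_limit")

created = []
try:
    for i in range(100):
        payload = "".join(
            random.choices(string.ascii_lowercase + string.digits, k=1024)
        ).encode()
        created.append(zk.create(f"/test_soft_limit/node_{i}", payload))
except ConnectionLoss:
    pass  # writes are rejected once the soft memory limit is exceeded

# kazoo reconnects automatically before the next request; a transaction whose
# net effect frees memory is still accepted, which is what the test asserts.
txn = zk.transaction()
for path in created[:10]:
    txn.delete(path, version=-1)
txn.create("/test_soft_limit/node_10000019", b"abcde")
print(txn.commit())
zk.stop()
```
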
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'kill -9 4644'] Command:[docker exec -u root roottestkeeperincorrectconfig-gw7-node1-1 bash -c kill -9 4644] Stderr:bash: line 1: kill: (4644) - No such process Exitcode:1 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestkeeperincorrectconfig-gw7-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/2299b87aa6b489e3b396ee2ad60e770714906463ff43a3680f77dd21c1c6cc05/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/2299b87aa6b489e3b396ee2ad60e770714906463ff43a3680f77dd21c1c6cc05/json HTTP/1.1" 200 586 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:5281 Clickhouse process running. run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:5281 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Connection dropped: outstanding heartbeat ping not received Transition to CONNECTING Zookeeper connection lost Executing query select 20 on node1 Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Current start attempt failed. Will kill 5281 just in case. 
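
The interleaved "Connection dropped: outstanding heartbeat ping not received / Transition to CONNECTING / Zookeeper connection lost" lines are the Python ZooKeeper client's connection state machine reacting to the node being bounced between restart attempts. A sketch of observing those transitions with a kazoo state listener (host address hypothetical):

```python
# Sketch: watching the connection-state transitions that appear in the log.
from kazoo.client import KazooClient, KazooState

zk = KazooClient(hosts="172.16.2.4:2181")

def on_state(state):
    if state == KazooState.LOST:
        print("Zookeeper connection lost")    # session expired or closed
    elif state == KazooState.SUSPENDED:
        print("Transition to CONNECTING")     # heartbeat missed, retrying
    else:
        print("connected")

zk.add_listener(on_state)
zk.start()
```
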
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'kill -9 5281'] Command:[docker exec -u root roottestkeeperincorrectconfig-gw7-node1-1 bash -c kill -9 5281] Stderr:bash: line 1: kill: (5281) - No such process Exitcode:1 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n\n \n 9181\n 1\n /var/lib/clickhouse/coordination/log\n /var/lib/clickhouse/coordination/snapshots\n\n \n 5000\n 10000\n trace\n \n\n \n \n 1\n node1\n 9234\n 2\n node2\n 9234\n 3\n node3\n 9234\n \n \n \n\n' > /etc/clickhouse-server/config.d/enable_keeper1.xml"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c echo ' 9181 1 /var/lib/clickhouse/coordination/log /var/lib/clickhouse/coordination/snapshots 5000 10000 trace 1 node1 9234 2 node2 9234 3 node3 9234 ' > /etc/clickhouse-server/config.d/enable_keeper1.xml] run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestkeeperincorrectconfig-gw7-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/7dbd2697fe406cb250dbb2b9a76ad2918dc321deafeb904e3379da53a67f116d/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/7dbd2697fe406cb250dbb2b9a76ad2918dc321deafeb904e3379da53a67f116d/json HTTP/1.1" 200 586 Connection dropped: socket connection error: No route to host Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:5924 Clickhouse process running. 
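
The echo command above writes /etc/clickhouse-server/config.d/enable_keeper1.xml, but the XML tags were stripped when the log was captured; only the element values survive. Based on the standard keeper_server layout used by these tests, the three-server variant plausibly looked like this (element names assumed, values taken from the log):

```xml
<!-- Plausible reconstruction: tag names assumed, values preserved in the log. -->
<clickhouse>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>1</server_id>
        <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>5000</operation_timeout_ms>
            <session_timeout_ms>10000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server><id>1</id><hostname>node1</hostname><port>9234</port></server>
            <server><id>2</id><hostname>node2</hostname><port>9234</port></server>
            <server><id>3</id><hostname>node3</hostname><port>9234</port></server>
        </raft_configuration>
    </keeper_server>
</clickhouse>
```
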
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:5924 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Stderr: Container roottestkeepermemorysoftlimit-gw4-node-1 Stopping Stderr: Container roottestkeepermemorysoftlimit-gw4-node-1 Stopped Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo3-1 Stopping Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo1-1 Stopping Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo2-1 Stopping Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo3-1 Stopped Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo2-1 Stopped Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo1-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/.env --project-name roottestkeepermemorysoftlimit-gw4 --file /ClickHouse/tests/integration/test_keeper_memory_soft_limit/_instances-0-gw4/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml down --volumes] Stderr: Container roottestkeepermemorysoftlimit-gw4-node-1 Stopping Stderr: Container roottestkeepermemorysoftlimit-gw4-node-1 Stopped Stderr: Container roottestkeepermemorysoftlimit-gw4-node-1 Removing Stderr: Container roottestkeepermemorysoftlimit-gw4-node-1 Removed Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo1-1 Stopping Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo2-1 Stopping Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo3-1 Stopping Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo1-1 Stopped Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo1-1 Removing Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo2-1 Stopped Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo2-1 Removing Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo3-1 Stopped Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo3-1 Removing Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo3-1 Removed Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo2-1 Removed Stderr: Container roottestkeepermemorysoftlimit-gw4-zoo1-1 Removed Stderr: Network roottestkeepermemorysoftlimit-gw4_default Removing Stderr: Network roottestkeepermemorysoftlimit-gw4_default Removed Cleanup called Docker networks for project roottestkeepermemorysoftlimit-gw4 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestkeepermemorysoftlimit-gw4 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Executing query select 20 on node1 Docker volumes for project roottestkeepermemorysoftlimit-gw4 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestkeepermemorysoftlimit-gw4-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: 
roottestkeepermemorysoftlimit-gw4 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:1 Volumes pruned: 1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Current start attempt failed. Will kill 5924 just in case. run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'kill -9 5924'] Command:[docker exec -u root roottestkeeperincorrectconfig-gw7-node1-1 bash -c kill -9 5924] Stderr:bash: line 1: kill: (5924) - No such process Exitcode:1 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestkeeperincorrectconfig-gw7-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/1fcadc9c4507dbd102671e525f46d252878d973b4590154c464dee19b9ff4e27/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/1fcadc9c4507dbd102671e525f46d252878d973b4590154c464dee19b9ff4e27/json HTTP/1.1" 200 586 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:6561 Clickhouse process running. 
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:6561 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Connection dropped: socket connection error: None Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Current start attempt failed. Will kill 6561 just in case. run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'kill -9 6561'] Command:[docker exec -u root roottestkeeperincorrectconfig-gw7-node1-1 bash -c kill -9 6561] Stderr:bash: line 1: kill: (6561) - No such process Exitcode:1 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n\n \n 9181\n 1\n /var/lib/clickhouse/coordination/log\n /var/lib/clickhouse/coordination/snapshots\n\n \n 5000\n 10000\n trace\n \n\n \n \n 1\n node1\n 9234\n \n \n \n\n' > /etc/clickhouse-server/config.d/enable_keeper1.xml"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c echo ' 9181 1 /var/lib/clickhouse/coordination/log /var/lib/clickhouse/coordination/snapshots 5000 10000 trace 1 node1 9234 ' > /etc/clickhouse-server/config.d/enable_keeper1.xml] run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestkeeperincorrectconfig-gw7-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/674a270e11346ca633d6a33b1a37296d8f8dfcfa60acce3e4756df064faa57a7/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/674a270e11346ca633d6a33b1a37296d8f8dfcfa60acce3e4756df064faa57a7/json HTTP/1.1" 200 586 run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:7204 Clickhouse process running. 
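
The repeated ps/grep/kill cycle above is the harness checking for a live clickhouse process, killing any stale PID "just in case", and starting a fresh one between config rewrites (the second echo just above writes the single-server variant of enable_keeper1.xml, with only server 1/node1/9234 left in the raft configuration). A minimal sketch of that check-and-kill step, assuming plain docker exec driven via subprocess rather than the Docker API the real harness uses:

```python
# Sketch of the restart loop's process check, mirroring the commands in the log.
import subprocess
from typing import Optional

PS = ("ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc'"
      " | grep -v 'bash -c' | awk '{print $1}'")

def clickhouse_pid(container: str) -> Optional[str]:
    out = subprocess.run(
        ["docker", "exec", container, "bash", "-c", PS],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return out or None

def kill_clickhouse(container: str) -> None:
    pid = clickhouse_pid(container)
    if pid:
        # kill -9 races with the process exiting on its own, hence the
        # "No such process" / Exitcode:1 lines in the log: tolerate failure.
        subprocess.run(["docker", "exec", "-u", "root", container,
                        "bash", "-c", f"kill -9 {pid}"], check=False)
```
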
run container_id:roottestkeeperincorrectconfig-gw7-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestkeeperincorrectconfig-gw7-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:7204 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query SELECT 1 on node1 Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/.env --project-name roottestkeeperincorrectconfig-gw7 --file /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/node1/docker-compose.yml stop --timeout 20] [gw7] PASSED test_keeper_incorrect_config/test.py::test_invalid_configs Stderr: Container roottestkeeperincorrectconfig-gw7-node1-1 Stopping Stderr: Container roottestkeeperincorrectconfig-gw7-node1-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/.env --project-name roottestkeeperincorrectconfig-gw7 --file /ClickHouse/tests/integration/test_keeper_incorrect_config/_instances-0-gw7/node1/docker-compose.yml down --volumes] Stderr: Container roottestkeeperincorrectconfig-gw7-node1-1 Stopping Stderr: Container roottestkeeperincorrectconfig-gw7-node1-1 Stopped Stderr: Container roottestkeeperincorrectconfig-gw7-node1-1 Removing Stderr: Container roottestkeeperincorrectconfig-gw7-node1-1 Removed Stderr: Network roottestkeeperincorrectconfig-gw7_default Removing Stderr: Network roottestkeeperincorrectconfig-gw7_default Removed Cleanup called Docker networks for project roottestkeeperincorrectconfig-gw7 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestkeeperincorrectconfig-gw7 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestkeeperincorrectconfig-gw7 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestkeeperincorrectconfig-gw7-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestkeeperincorrectconfig-gw7 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
Command:[docker volume ls | wc -l] Stdout:1 Volumes pruned: 1 ============================== slowest durations =============================== 93.85s call test_keeper_incorrect_config/test.py::test_invalid_configs 33.61s call test_jbod_ha/test.py::test_jbod_ha 23.79s call test_keeper_broken_logs/test.py::test_single_node_broken_log 21.59s teardown test_encrypted_disk/test.py::test_restart 20.06s setup test_file_cluster/test.py::test_count 19.06s setup test_keeper_client/test.py::test_base_commands 18.73s setup test_keeper_memory_soft_limit/test.py::test_soft_limit_create 18.30s setup test_fetch_partition_from_auxiliary_zookeeper/test.py::test_fetch_part_from_allowed_zookeeper[PART-2020-08-28-20200828_0_0_0] 17.67s setup test_jbod_ha/test.py::test_jbod_ha 17.15s setup test_encrypted_disk/test.py::test_add_keys 17.08s setup test_keeper_availability_zone/test.py::test_get_availability_zone 17.00s setup test_drop_replica_with_auxiliary_zookeepers/test.py::test_drop_replica_in_auxiliary_zookeeper 16.82s teardown test_keeper_memory_soft_limit/test.py::test_soft_limit_create 16.45s setup test_hedged_requests_parallel/test.py::test_combination1 15.69s setup test_keeper_incorrect_config/test.py::test_invalid_configs 15.54s call test_encrypted_disk/test.py::test_restart 15.06s setup test_external_http_authenticator/test.py::test_basic_auth_failed 14.91s setup test_format_schema_on_server/test.py::test_drop_cache_protobuf_format 14.73s setup test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_big[0] 13.41s setup test_http_native/test.py::test_http_native_returns_timezone 13.38s setup test_grpc_protocol_ssl/test.py::test_insecure_channel 13.16s setup test_graphite_merge_tree/test.py::test_broken_partial_rollup 13.10s call test_hedged_requests_parallel/test.py::test_query_with_no_data_to_sample 13.07s setup test_keeper_broken_logs/test.py::test_single_node_broken_log 12.94s setup test_explain_estimates/test.py::test_explain_estimates 12.88s setup test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used 11.79s teardown test_fetch_partition_from_auxiliary_zookeeper/test.py::test_fetch_part_from_allowed_zookeeper[PARTITION-2020-08-27-2020-08-27] 11.68s call test_hedged_requests_parallel/test.py::test_send_data 10.46s call test_hedged_requests_parallel/test.py::test_combination2 10.14s call test_hedged_requests_parallel/test.py::test_combination1 9.55s call test_hedged_requests_parallel/test.py::test_send_table_status_sleep 9.47s call test_drop_replica_with_auxiliary_zookeepers/test.py::test_drop_replica_in_auxiliary_zookeeper 9.06s teardown test_file_cluster/test.py::test_select_all 8.91s setup test_input_format_parallel_parsing_memory_tracking/test.py::test_memory_tracking_total 8.84s call test_graphite_merge_tree/test.py::test_path_dangling_pointer 8.75s setup test_http_and_readonly/test.py::test_http_get_is_readonly 6.77s call test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_small[1] 6.70s teardown test_hedged_requests_parallel/test.py::test_send_table_status_sleep 6.57s call test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_different_header[0] 6.55s teardown test_http_and_readonly/test.py::test_http_get_is_readonly 6.52s call test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_different_header[1] 6.44s teardown test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_2[1] 6.41s call 
test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_big[1] 6.13s call test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[1-0] 5.68s call test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_big[0] 5.63s call test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_2[0] 5.62s teardown test_jbod_load_balancing/test.py::test_jbod_load_balancing_round_robin 5.58s call test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_success[0] 5.56s call test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_2[1] 5.56s teardown test_format_schema_on_server/test.py::test_protobuf_format_output 5.52s call test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[0-1] 5.43s call test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[0-0] 5.24s call test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_success[1] 5.22s call test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_small[0] 5.22s call test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[1-1] 4.77s call test_external_http_authenticator/test.py::test_session_settings_from_auth_response 4.43s call test_encrypted_disk/test.py::test_add_keys_with_id 4.24s call test_encrypted_disk/test.py::test_add_keys 4.23s teardown test_graphite_merge_tree/test.py::test_wrong_rollup_config 4.20s teardown test_http_native/test.py::test_http_native_returns_timezone 3.72s call test_input_format_parallel_parsing_memory_tracking/test.py::test_memory_tracking_total 3.71s call test_file_cluster/test.py::test_format_detection 3.37s call test_encrypted_disk/test.py::test_part_move[local_policy-destination_disks0] 3.24s teardown test_explain_estimates/test.py::test_explain_estimates 3.11s call test_encrypted_disk/test.py::test_log_family 3.07s teardown test_external_http_authenticator/test.py::test_user_from_config_basic_auth_pass 3.00s teardown test_grpc_protocol_ssl/test.py::test_wrong_client_certificate 2.77s teardown test_input_format_parallel_parsing_memory_tracking/test.py::test_memory_tracking_total 2.70s call test_encrypted_disk/test.py::test_backup_restore[S3-encrypted_policy-s3_encrypted_default_path-False] 2.56s call test_encrypted_disk/test.py::test_migration_from_old_version[version_1be] 2.54s call test_encrypted_disk/test.py::test_migration_from_old_version[version_1le] 2.49s teardown test_keeper_client/test.py::test_set_without_version 2.48s call test_encrypted_disk/test.py::test_migration_from_old_version[version_2] 2.41s call test_encrypted_disk/test.py::test_backup_restore[S3-encrypted_policy-encrypted_policy-False] 2.39s call test_encrypted_disk/test.py::test_optimize_table[s3_policy-disk_s3_encrypted] 2.35s teardown test_jbod_ha/test.py::test_jbod_ha 2.22s teardown test_drop_replica_with_auxiliary_zookeepers/test.py::test_drop_replica_in_auxiliary_zookeeper 2.20s call test_encrypted_disk/test.py::test_part_move[s3_policy-destination_disks1] 2.19s call test_grpc_protocol_ssl/test.py::test_insecure_channel 2.08s teardown test_keeper_availability_zone/test.py::test_get_availability_zone 2.03s call test_encrypted_disk/test.py::test_backup_restore[File-s3_encrypted_default_path-encrypted_policy-False] 2.00s call test_grpc_protocol_ssl/test.py::test_wrong_client_certificate 1.97s call 
test_file_cluster/test.py::test_schema_inference 1.95s call test_encrypted_disk/test.py::test_optimize_table[local_policy-disk_local_encrypted] 1.94s call test_encrypted_disk/test.py::test_backup_restore[S3-s3_encrypted_default_path-encrypted_policy-False] 1.89s call test_format_schema_on_server/test.py::test_drop_capn_proto_format 1.86s call test_fetch_partition_from_auxiliary_zookeeper/test.py::test_fetch_part_from_allowed_zookeeper[PART-2020-08-28-20200828_0_0_0] 1.68s call test_encrypted_disk/test.py::test_backup_restore[File-encrypted_policy-local_policy-True] 1.67s call test_encrypted_disk/test.py::test_backup_restore[File-local_policy-encrypted_policy-False] 1.66s call test_encrypted_disk/test.py::test_backup_restore[S3-s3_encrypted_default_path-s3_encrypted_default_path-False] 1.66s call test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used_next_disk 1.65s call test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used_detect_background_changes 1.61s call test_encrypted_disk/test.py::test_backup_restore[File-encrypted_policy-local_policy-False] 1.56s call test_format_schema_on_server/test.py::test_drop_cache_protobuf_format 1.41s teardown test_keeper_broken_logs/test.py::test_single_node_broken_log 1.41s call test_fetch_partition_from_auxiliary_zookeeper/test.py::test_fetch_part_from_allowed_zookeeper[PARTITION-2020-08-27-2020-08-27] 1.38s call test_graphite_merge_tree/test.py::test_combined_rules_with_default 1.34s call test_encrypted_disk/test.py::test_encrypted_disk[encrypted_policy] 1.29s call test_encrypted_disk/test.py::test_encrypted_disk[s3_policy] 1.29s call test_encrypted_disk/test.py::test_encrypted_disk[encrypted_policy_key192b] 1.24s call test_encrypted_disk/test.py::test_encrypted_disk[local_policy] 1.22s call test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used 1.22s call test_graphite_merge_tree/test.py::test_broken_partial_rollup 1.21s call test_encrypted_disk/test.py::test_read_in_order 1.17s call test_graphite_merge_tree/test.py::test_rollup_versions 1.13s call test_explain_estimates/test.py::test_explain_estimates 1.11s call test_file_cluster/test.py::test_missing_file 0.92s call test_graphite_merge_tree/test.py::test_combined_rules 0.91s teardown test_keeper_incorrect_config/test.py::test_invalid_configs 0.87s call test_external_http_authenticator/test.py::test_user_create_basic_auth_pass 0.80s call test_jbod_load_balancing/test.py::test_jbod_load_balancing_round_robin 0.70s call test_graphite_merge_tree/test.py::test_multiple_output_blocks 0.65s call test_graphite_merge_tree/test.py::test_system_graphite_retentions 0.63s call test_graphite_merge_tree/test.py::test_rollup_aggregation 0.61s call test_format_schema_on_server/test.py::test_protobuf_format_input 0.61s call test_format_schema_on_server/test.py::test_protobuf_format_output 0.53s call test_external_http_authenticator/test.py::test_user_from_config_basic_auth_pass 0.53s call test_file_cluster/test.py::test_count 0.53s call test_graphite_merge_tree/test.py::test_paths_not_matching_any_pattern 0.43s call test_file_cluster/test.py::test_select_all 0.43s call test_file_cluster/test.py::test_no_such_files 0.42s teardown test_encrypted_disk/test.py::test_log_family 0.37s call test_keeper_availability_zone/test.py::test_get_availability_zone 0.34s call test_keeper_memory_soft_limit/test.py::test_soft_limit_create 0.32s call test_graphite_merge_tree/test.py::test_wrong_rollup_config 0.27s call test_graphite_merge_tree/test.py::test_multiple_paths_and_versions 0.27s 
teardown test_encrypted_disk/test.py::test_part_move[s3_policy-destination_disks1] 0.27s teardown test_graphite_merge_tree/test.py::test_system_graphite_retentions 0.27s call test_external_http_authenticator/test.py::test_basic_auth_failed 0.27s call test_file_cluster/test.py::test_non_existent_cluster 0.27s setup test_graphite_merge_tree/test.py::test_wrong_rollup_config 0.27s setup test_graphite_merge_tree/test.py::test_path_dangling_pointer 0.27s call test_graphite_merge_tree/test.py::test_rollup_aggregation_2 0.27s teardown test_encrypted_disk/test.py::test_optimize_table[s3_policy-disk_s3_encrypted] 0.24s call test_grpc_protocol_ssl/test.py::test_secure_channel 0.22s teardown test_graphite_merge_tree/test.py::test_broken_partial_rollup 0.22s teardown test_graphite_merge_tree/test.py::test_rollup_versions 0.22s teardown test_graphite_merge_tree/test.py::test_combined_rules 0.22s setup test_graphite_merge_tree/test.py::test_combined_rules 0.22s teardown test_encrypted_disk/test.py::test_backup_restore[S3-encrypted_policy-encrypted_policy-False] 0.22s setup test_graphite_merge_tree/test.py::test_system_graphite_retentions 0.22s teardown test_graphite_merge_tree/test.py::test_rollup_aggregation_2 0.22s teardown test_encrypted_disk/test.py::test_backup_restore[File-local_policy-encrypted_policy-False] 0.22s setup test_graphite_merge_tree/test.py::test_rollup_aggregation 0.22s teardown test_graphite_merge_tree/test.py::test_multiple_output_blocks 0.22s teardown test_encrypted_disk/test.py::test_encrypted_disk[encrypted_policy] 0.22s setup test_graphite_merge_tree/test.py::test_multiple_output_blocks 0.22s teardown test_encrypted_disk/test.py::test_backup_restore[File-s3_encrypted_default_path-encrypted_policy-False] 0.22s teardown test_encrypted_disk/test.py::test_add_keys_with_id 0.22s teardown test_graphite_merge_tree/test.py::test_combined_rules_with_default 0.22s setup test_graphite_merge_tree/test.py::test_multiple_paths_and_versions 0.22s setup test_graphite_merge_tree/test.py::test_paths_not_matching_any_pattern 0.22s teardown test_encrypted_disk/test.py::test_encrypted_disk[encrypted_policy_key192b] 0.22s teardown test_graphite_merge_tree/test.py::test_multiple_paths_and_versions 0.22s teardown test_encrypted_disk/test.py::test_backup_restore[S3-encrypted_policy-s3_encrypted_default_path-False] 0.22s teardown test_encrypted_disk/test.py::test_add_keys 0.22s teardown test_encrypted_disk/test.py::test_backup_restore[File-encrypted_policy-local_policy-True] 0.22s setup test_graphite_merge_tree/test.py::test_combined_rules_with_default 0.22s teardown test_encrypted_disk/test.py::test_encrypted_disk[s3_policy] 0.22s teardown test_encrypted_disk/test.py::test_backup_restore[S3-s3_encrypted_default_path-encrypted_policy-False] 0.22s teardown test_encrypted_disk/test.py::test_migration_from_old_version[version_1le] 0.22s teardown test_graphite_merge_tree/test.py::test_paths_not_matching_any_pattern 0.22s setup test_graphite_merge_tree/test.py::test_rollup_aggregation_2 0.22s setup test_graphite_merge_tree/test.py::test_rollup_versions 0.22s teardown test_encrypted_disk/test.py::test_backup_restore[S3-s3_encrypted_default_path-s3_encrypted_default_path-False] 0.22s teardown test_encrypted_disk/test.py::test_migration_from_old_version[version_2] 0.22s teardown test_graphite_merge_tree/test.py::test_rollup_aggregation 0.22s teardown test_encrypted_disk/test.py::test_read_in_order 0.22s teardown test_encrypted_disk/test.py::test_backup_restore[File-encrypted_policy-local_policy-False] 0.22s 
teardown test_encrypted_disk/test.py::test_optimize_table[local_policy-disk_local_encrypted] 0.22s teardown test_encrypted_disk/test.py::test_encrypted_disk[local_policy] 0.22s teardown test_encrypted_disk/test.py::test_part_move[local_policy-destination_disks0] 0.22s teardown test_encrypted_disk/test.py::test_migration_from_old_version[version_1be] 0.22s teardown test_graphite_merge_tree/test.py::test_path_dangling_pointer 0.12s call test_http_and_readonly/test.py::test_http_get_is_readonly 0.11s setup test_keeper_client/test.py::test_delete_stale_backups 0.10s setup test_keeper_client/test.py::test_quoted_argument_parsing 0.09s setup test_keeper_client/test.py::test_get_all_children_number 0.09s call test_http_native/test.py::test_http_native_returns_timezone 0.08s setup test_keeper_client/test.py::test_four_letter_word_commands 0.08s setup test_keeper_client/test.py::test_set_without_version 0.08s setup test_keeper_client/test.py::test_rm_without_version 0.08s setup test_keeper_client/test.py::test_find_super_nodes 0.08s setup test_keeper_client/test.py::test_big_family 0.08s setup test_keeper_client/test.py::test_set_with_version 0.08s setup test_keeper_client/test.py::test_rm_with_version 0.06s teardown test_keeper_client/test.py::test_get_all_children_number 0.06s teardown test_keeper_client/test.py::test_rm_without_version 0.05s teardown test_keeper_client/test.py::test_four_letter_word_commands 0.05s teardown test_keeper_client/test.py::test_quoted_argument_parsing 0.05s teardown test_keeper_client/test.py::test_set_with_version 0.05s teardown test_keeper_client/test.py::test_delete_stale_backups 0.05s teardown test_keeper_client/test.py::test_rm_with_version 0.05s teardown test_keeper_client/test.py::test_big_family 0.04s teardown test_keeper_client/test.py::test_find_super_nodes 0.04s teardown test_keeper_client/test.py::test_base_commands 0.04s call test_keeper_client/test.py::test_find_super_nodes 0.04s call test_keeper_client/test.py::test_big_family 0.03s call test_keeper_client/test.py::test_get_all_children_number 0.03s call test_keeper_client/test.py::test_delete_stale_backups 0.02s call test_keeper_client/test.py::test_set_without_version 0.02s call test_keeper_client/test.py::test_base_commands 0.02s call test_keeper_client/test.py::test_quoted_argument_parsing 0.01s call test_keeper_client/test.py::test_rm_without_version 0.01s call test_keeper_client/test.py::test_set_with_version 0.01s call test_keeper_client/test.py::test_rm_with_version 0.00s setup test_encrypted_disk/test.py::test_backup_restore[File-encrypted_policy-local_policy-False] 0.00s call test_keeper_client/test.py::test_four_letter_word_commands 0.00s setup test_format_schema_on_server/test.py::test_protobuf_format_input 0.00s setup test_encrypted_disk/test.py::test_backup_restore[S3-encrypted_policy-encrypted_policy-False] 0.00s setup test_encrypted_disk/test.py::test_backup_restore[File-s3_encrypted_default_path-encrypted_policy-False] 0.00s setup test_encrypted_disk/test.py::test_backup_restore[S3-s3_encrypted_default_path-s3_encrypted_default_path-False] 0.00s setup test_encrypted_disk/test.py::test_encrypted_disk[local_policy] 0.00s setup test_encrypted_disk/test.py::test_backup_restore[S3-s3_encrypted_default_path-encrypted_policy-False] 0.00s setup test_encrypted_disk/test.py::test_optimize_table[s3_policy-disk_s3_encrypted] 0.00s setup test_encrypted_disk/test.py::test_backup_restore[File-encrypted_policy-local_policy-True] 0.00s setup 
test_encrypted_disk/test.py::test_migration_from_old_version[version_1be]
0.00s setup test_encrypted_disk/test.py::test_add_keys_with_id
0.00s setup test_encrypted_disk/test.py::test_encrypted_disk[encrypted_policy_key192b]
0.00s setup test_encrypted_disk/test.py::test_migration_from_old_version[version_2]
0.00s setup test_encrypted_disk/test.py::test_backup_restore[File-local_policy-encrypted_policy-False]
0.00s teardown test_hedged_requests_parallel/test.py::test_combination1
0.00s setup test_fetch_partition_from_auxiliary_zookeeper/test.py::test_fetch_part_from_allowed_zookeeper[PARTITION-2020-08-27-2020-08-27]
0.00s setup test_encrypted_disk/test.py::test_optimize_table[local_policy-disk_local_encrypted]
0.00s setup test_encrypted_disk/test.py::test_migration_from_old_version[version_1le]
0.00s setup test_encrypted_disk/test.py::test_backup_restore[S3-encrypted_policy-s3_encrypted_default_path-False]
0.00s teardown test_file_cluster/test.py::test_count
0.00s setup test_encrypted_disk/test.py::test_encrypted_disk[encrypted_policy]
0.00s setup test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_big[1]
0.00s setup test_encrypted_disk/test.py::test_part_move[local_policy-destination_disks0]
0.00s teardown test_fetch_partition_from_auxiliary_zookeeper/test.py::test_fetch_part_from_allowed_zookeeper[PART-2020-08-28-20200828_0_0_0]
0.00s setup test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[0-0]
0.00s setup test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[0-1]
0.00s setup test_encrypted_disk/test.py::test_part_move[s3_policy-destination_disks1]
0.00s setup test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[1-0]
0.00s setup test_encrypted_disk/test.py::test_log_family
0.00s setup test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_small[0]
0.00s setup test_file_cluster/test.py::test_non_existent_cluster
0.00s setup test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_success[0]
0.00s setup test_encrypted_disk/test.py::test_encrypted_disk[s3_policy]
0.00s teardown test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[0-0]
0.00s setup test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_small[1]
0.00s setup test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[1-1]
0.00s setup test_encrypted_disk/test.py::test_read_in_order
0.00s setup test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_success[1]
0.00s teardown test_hedged_requests_parallel/test.py::test_send_data
0.00s teardown test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_big[0]
0.00s setup test_format_schema_on_server/test.py::test_protobuf_format_output
0.00s teardown test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_small[0]
0.00s setup test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_2[1]
0.00s setup test_encrypted_disk/test.py::test_restart
0.00s setup test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_2[0]
0.00s setup test_hedged_requests_parallel/test.py::test_send_table_status_sleep
0.00s setup test_hedged_requests_parallel/test.py::test_combination2
0.00s setup test_hedged_requests_parallel/test.py::test_send_data
0.00s teardown test_hedged_requests_parallel/test.py::test_query_with_no_data_to_sample
0.00s teardown test_grpc_protocol_ssl/test.py::test_insecure_channel
0.00s setup test_external_http_authenticator/test.py::test_session_settings_from_auth_response
0.00s setup test_file_cluster/test.py::test_missing_file
0.00s setup test_grpc_protocol_ssl/test.py::test_secure_channel
0.00s setup test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_different_header[1]
0.00s setup test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used_next_disk
0.00s teardown test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_success[0]
0.00s setup test_file_cluster/test.py::test_no_such_files
0.00s teardown test_external_http_authenticator/test.py::test_basic_auth_failed
0.00s teardown test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_success[1]
0.00s setup test_format_schema_on_server/test.py::test_drop_capn_proto_format
0.00s setup test_external_http_authenticator/test.py::test_user_create_basic_auth_pass
0.00s setup test_external_http_authenticator/test.py::test_user_from_config_basic_auth_pass
0.00s teardown test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[1-1]
0.00s teardown test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_different_header[1]
0.00s teardown test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_different_header[0]
0.00s teardown test_file_cluster/test.py::test_no_such_files
0.00s setup test_file_cluster/test.py::test_format_detection
0.00s teardown test_format_schema_on_server/test.py::test_drop_cache_protobuf_format
0.00s teardown test_external_http_authenticator/test.py::test_session_settings_from_auth_response
0.00s teardown test_external_http_authenticator/test.py::test_user_create_basic_auth_pass
0.00s teardown test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[1-0]
0.00s setup test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_different_header[0]
0.00s teardown test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[0-1]
0.00s setup test_file_cluster/test.py::test_select_all
0.00s setup test_hedged_requests_parallel/test.py::test_query_with_no_data_to_sample
0.00s setup test_jbod_load_balancing/test.py::test_jbod_load_balancing_round_robin
0.00s setup test_grpc_protocol_ssl/test.py::test_wrong_client_certificate
0.00s teardown test_hedged_requests_parallel/test.py::test_combination2
0.00s teardown test_file_cluster/test.py::test_format_detection
0.00s setup test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used_detect_background_changes
0.00s teardown test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_big[1]
0.00s setup test_file_cluster/test.py::test_schema_inference
0.00s teardown test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used
0.00s teardown test_file_cluster/test.py::test_missing_file
0.00s teardown test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_2[0]
0.00s teardown test_grpc_protocol_ssl/test.py::test_secure_channel
0.00s teardown test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_small[1]
0.00s teardown test_file_cluster/test.py::test_schema_inference
0.00s teardown test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used_detect_background_changes
0.00s teardown test_format_schema_on_server/test.py::test_protobuf_format_input
0.00s teardown test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used_next_disk
0.00s teardown test_file_cluster/test.py::test_non_existent_cluster
0.00s teardown test_format_schema_on_server/test.py::test_drop_capn_proto_format
=========================== short test summary info ============================
PASSED test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used
PASSED test_graphite_merge_tree/test.py::test_broken_partial_rollup
PASSED test_external_http_authenticator/test.py::test_basic_auth_failed
PASSED test_grpc_protocol_ssl/test.py::test_insecure_channel
PASSED test_graphite_merge_tree/test.py::test_combined_rules
PASSED test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used_detect_background_changes
PASSED test_grpc_protocol_ssl/test.py::test_secure_channel
PASSED test_format_schema_on_server/test.py::test_drop_cache_protobuf_format
PASSED test_jbod_load_balancing/test.py::test_jbod_load_balancing_least_used_next_disk
PASSED test_graphite_merge_tree/test.py::test_combined_rules_with_default
PASSED test_grpc_protocol_ssl/test.py::test_wrong_client_certificate
PASSED test_jbod_load_balancing/test.py::test_jbod_load_balancing_round_robin
PASSED test_format_schema_on_server/test.py::test_drop_capn_proto_format
PASSED test_graphite_merge_tree/test.py::test_multiple_output_blocks
PASSED test_format_schema_on_server/test.py::test_protobuf_format_input
PASSED test_keeper_client/test.py::test_base_commands
PASSED test_keeper_client/test.py::test_big_family
PASSED test_graphite_merge_tree/test.py::test_multiple_paths_and_versions
PASSED test_keeper_client/test.py::test_delete_stale_backups
PASSED test_format_schema_on_server/test.py::test_protobuf_format_output
PASSED test_keeper_client/test.py::test_find_super_nodes
PASSED test_keeper_client/test.py::test_four_letter_word_commands
PASSED test_keeper_client/test.py::test_get_all_children_number
PASSED test_keeper_client/test.py::test_quoted_argument_parsing
PASSED test_external_http_authenticator/test.py::test_session_settings_from_auth_response
PASSED test_keeper_client/test.py::test_rm_with_version
PASSED test_keeper_client/test.py::test_rm_without_version
PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_big[0]
PASSED test_keeper_client/test.py::test_set_with_version
PASSED test_file_cluster/test.py::test_count
PASSED test_keeper_client/test.py::test_set_without_version
PASSED test_external_http_authenticator/test.py::test_user_create_basic_auth_pass
PASSED test_encrypted_disk/test.py::test_add_keys
PASSED test_external_http_authenticator/test.py::test_user_from_config_basic_auth_pass
PASSED test_file_cluster/test.py::test_format_detection
PASSED test_file_cluster/test.py::test_missing_file
PASSED test_file_cluster/test.py::test_no_such_files
PASSED test_encrypted_disk/test.py::test_add_keys_with_id
PASSED test_file_cluster/test.py::test_non_existent_cluster
PASSED test_hedged_requests_parallel/test.py::test_combination1
PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_big[1]
PASSED test_encrypted_disk/test.py::test_backup_restore[File-encrypted_policy-local_policy-False]
PASSED test_file_cluster/test.py::test_schema_inference
PASSED test_file_cluster/test.py::test_select_all
PASSED test_graphite_merge_tree/test.py::test_path_dangling_pointer
PASSED test_graphite_merge_tree/test.py::test_paths_not_matching_any_pattern
PASSED test_encrypted_disk/test.py::test_backup_restore[File-encrypted_policy-local_policy-True]
PASSED test_graphite_merge_tree/test.py::test_rollup_aggregation
PASSED test_graphite_merge_tree/test.py::test_rollup_aggregation_2
PASSED test_encrypted_disk/test.py::test_backup_restore[File-local_policy-encrypted_policy-False]
PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_small[0]
PASSED test_graphite_merge_tree/test.py::test_rollup_versions
PASSED test_encrypted_disk/test.py::test_backup_restore[File-s3_encrypted_default_path-encrypted_policy-False]
PASSED test_http_and_readonly/test.py::test_http_get_is_readonly
PASSED test_graphite_merge_tree/test.py::test_system_graphite_retentions
PASSED test_graphite_merge_tree/test.py::test_wrong_rollup_config
PASSED test_encrypted_disk/test.py::test_backup_restore[S3-encrypted_policy-encrypted_policy-False]
PASSED test_hedged_requests_parallel/test.py::test_combination2
PASSED test_input_format_parallel_parsing_memory_tracking/test.py::test_memory_tracking_total
PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_corrupted_small[1]
PASSED test_encrypted_disk/test.py::test_backup_restore[S3-encrypted_policy-s3_encrypted_default_path-False]
PASSED test_fetch_partition_from_auxiliary_zookeeper/test.py::test_fetch_part_from_allowed_zookeeper[PART-2020-08-28-20200828_0_0_0]
PASSED test_encrypted_disk/test.py::test_backup_restore[S3-s3_encrypted_default_path-encrypted_policy-False]
PASSED test_fetch_partition_from_auxiliary_zookeeper/test.py::test_fetch_part_from_allowed_zookeeper[PARTITION-2020-08-27-2020-08-27]
PASSED test_encrypted_disk/test.py::test_backup_restore[S3-s3_encrypted_default_path-s3_encrypted_default_path-False]
PASSED test_encrypted_disk/test.py::test_encrypted_disk[encrypted_policy]
PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_different_header[0]
PASSED test_encrypted_disk/test.py::test_encrypted_disk[encrypted_policy_key192b]
PASSED test_encrypted_disk/test.py::test_encrypted_disk[local_policy]
PASSED test_encrypted_disk/test.py::test_encrypted_disk[s3_policy]
PASSED test_hedged_requests_parallel/test.py::test_query_with_no_data_to_sample
PASSED test_drop_replica_with_auxiliary_zookeepers/test.py::test_drop_replica_in_auxiliary_zookeeper
PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_different_header[1]
PASSED test_encrypted_disk/test.py::test_log_family
PASSED test_http_native/test.py::test_http_native_returns_timezone
PASSED test_encrypted_disk/test.py::test_migration_from_old_version[version_1be]
PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_success[0]
PASSED test_encrypted_disk/test.py::test_migration_from_old_version[version_1le]
PASSED test_encrypted_disk/test.py::test_migration_from_old_version[version_2]
PASSED test_hedged_requests_parallel/test.py::test_send_data
PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_success[1]
PASSED test_encrypted_disk/test.py::test_optimize_table[local_policy-disk_local_encrypted]
PASSED test_encrypted_disk/test.py::test_optimize_table[s3_policy-disk_s3_encrypted]
PASSED test_explain_estimates/test.py::test_explain_estimates
PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[0-0]
PASSED test_encrypted_disk/test.py::test_part_move[local_policy-destination_disks0]
PASSED test_hedged_requests_parallel/test.py::test_send_table_status_sleep
PASSED test_encrypted_disk/test.py::test_part_move[s3_policy-destination_disks1]
PASSED test_encrypted_disk/test.py::test_read_in_order
PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[0-1]
PASSED test_jbod_ha/test.py::test_jbod_ha
PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[1-0]
PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_1[1-1]
PASSED test_encrypted_disk/test.py::test_restart
PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_2[0]
PASSED test_keeper_broken_logs/test.py::test_single_node_broken_log
PASSED test_keeper_availability_zone/test.py::test_get_availability_zone
PASSED test_insert_distributed_async_send/test.py::test_insert_distributed_async_send_truncated_2[1]
PASSED test_keeper_memory_soft_limit/test.py::test_soft_limit_create
PASSED test_keeper_incorrect_config/test.py::test_invalid_configs
======================= 100 passed in 162.29s (0:02:42) ========================